00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 597 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3263 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.087 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.088 The recommended git tool is: git 00:00:00.088 using credential 00000000-0000-0000-0000-000000000002 00:00:00.089 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.108 Fetching changes from the remote Git repository 00:00:00.110 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.136 Using shallow fetch with depth 1 00:00:00.136 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.136 > git --version # timeout=10 00:00:00.158 > git --version # 'git version 2.39.2' 00:00:00.158 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.178 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.178 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.488 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.498 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.508 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:04.508 > git config core.sparsecheckout # timeout=10 00:00:04.517 > git read-tree -mu HEAD # timeout=10 00:00:04.532 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:04.548 Commit message: "inventory: add WCP3 to free inventory" 00:00:04.548 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:04.632 [Pipeline] Start of Pipeline 00:00:04.650 [Pipeline] library 00:00:04.652 Loading library shm_lib@master 00:00:04.652 Library shm_lib@master is cached. Copying from home. 00:00:04.670 [Pipeline] node 00:00:04.679 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:04.681 [Pipeline] { 00:00:04.691 [Pipeline] catchError 00:00:04.692 [Pipeline] { 00:00:04.700 [Pipeline] wrap 00:00:04.706 [Pipeline] { 00:00:04.713 [Pipeline] stage 00:00:04.714 [Pipeline] { (Prologue) 00:00:04.728 [Pipeline] echo 00:00:04.730 Node: VM-host-SM0 00:00:04.735 [Pipeline] cleanWs 00:00:06.828 [WS-CLEANUP] Deleting project workspace... 00:00:06.828 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.833 [WS-CLEANUP] done 00:00:07.016 [Pipeline] setCustomBuildProperty 00:00:07.076 [Pipeline] httpRequest 00:00:07.101 [Pipeline] echo 00:00:07.102 Sorcerer 10.211.164.101 is alive 00:00:07.108 [Pipeline] httpRequest 00:00:07.112 HttpMethod: GET 00:00:07.113 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.113 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.129 Response Code: HTTP/1.1 200 OK 00:00:07.130 Success: Status code 200 is in the accepted range: 200,404 00:00:07.130 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:12.189 [Pipeline] sh 00:00:12.476 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:12.490 [Pipeline] httpRequest 00:00:12.519 [Pipeline] echo 00:00:12.521 Sorcerer 10.211.164.101 is alive 00:00:12.526 [Pipeline] httpRequest 00:00:12.530 HttpMethod: GET 00:00:12.530 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:12.530 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:12.549 Response Code: HTTP/1.1 200 OK 00:00:12.549 Success: Status code 200 is in the accepted range: 200,404 00:00:12.549 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:50.985 [Pipeline] sh 00:00:51.265 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:53.812 [Pipeline] sh 00:00:54.094 + git -C spdk log --oneline -n5 00:00:54.094 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:00:54.094 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:00:54.094 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:00:54.094 e03c164a1 nvme: add nvme_ctrlr_lock 00:00:54.094 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:00:54.115 [Pipeline] withCredentials 00:00:54.125 > git --version # timeout=10 00:00:54.137 > git --version # 'git version 2.39.2' 00:00:54.153 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:54.156 [Pipeline] { 00:00:54.165 [Pipeline] retry 00:00:54.168 [Pipeline] { 00:00:54.185 [Pipeline] sh 00:00:54.465 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:54.478 [Pipeline] } 00:00:54.499 [Pipeline] // retry 00:00:54.505 [Pipeline] } 00:00:54.525 [Pipeline] // withCredentials 00:00:54.536 [Pipeline] httpRequest 00:00:54.561 [Pipeline] echo 00:00:54.563 Sorcerer 10.211.164.101 is alive 00:00:54.572 [Pipeline] httpRequest 00:00:54.576 HttpMethod: GET 00:00:54.577 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:54.577 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:54.583 Response Code: HTTP/1.1 200 OK 00:00:54.584 Success: Status code 200 is in the accepted range: 200,404 00:00:54.584 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:23.307 [Pipeline] sh 00:01:23.586 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:24.971 [Pipeline] sh 00:01:25.338 + git -C dpdk log --oneline -n5 00:01:25.338 eeb0605f11 version: 23.11.0 00:01:25.338 238778122a doc: update release notes for 23.11 00:01:25.338 46aa6b3cfc doc: fix description of RSS features 00:01:25.338 
dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:25.338 7e421ae345 devtools: support skipping forbid rule check 00:01:25.354 [Pipeline] writeFile 00:01:25.367 [Pipeline] sh 00:01:25.642 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:25.654 [Pipeline] sh 00:01:25.932 + cat autorun-spdk.conf 00:01:25.932 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.932 SPDK_TEST_NVMF=1 00:01:25.932 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.932 SPDK_TEST_USDT=1 00:01:25.932 SPDK_RUN_UBSAN=1 00:01:25.932 SPDK_TEST_NVMF_MDNS=1 00:01:25.932 NET_TYPE=virt 00:01:25.932 SPDK_JSONRPC_GO_CLIENT=1 00:01:25.932 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:25.932 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:25.932 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:25.938 RUN_NIGHTLY=1 00:01:25.940 [Pipeline] } 00:01:25.956 [Pipeline] // stage 00:01:25.973 [Pipeline] stage 00:01:25.976 [Pipeline] { (Run VM) 00:01:25.991 [Pipeline] sh 00:01:26.268 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:26.268 + echo 'Start stage prepare_nvme.sh' 00:01:26.269 Start stage prepare_nvme.sh 00:01:26.269 + [[ -n 2 ]] 00:01:26.269 + disk_prefix=ex2 00:01:26.269 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:26.269 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:26.269 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:26.269 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.269 ++ SPDK_TEST_NVMF=1 00:01:26.269 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.269 ++ SPDK_TEST_USDT=1 00:01:26.269 ++ SPDK_RUN_UBSAN=1 00:01:26.269 ++ SPDK_TEST_NVMF_MDNS=1 00:01:26.269 ++ NET_TYPE=virt 00:01:26.269 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:26.269 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:26.269 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:26.269 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:26.269 ++ RUN_NIGHTLY=1 00:01:26.269 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:26.269 + nvme_files=() 00:01:26.269 + declare -A nvme_files 00:01:26.269 + backend_dir=/var/lib/libvirt/images/backends 00:01:26.269 + nvme_files['nvme.img']=5G 00:01:26.269 + nvme_files['nvme-cmb.img']=5G 00:01:26.269 + nvme_files['nvme-multi0.img']=4G 00:01:26.269 + nvme_files['nvme-multi1.img']=4G 00:01:26.269 + nvme_files['nvme-multi2.img']=4G 00:01:26.269 + nvme_files['nvme-openstack.img']=8G 00:01:26.269 + nvme_files['nvme-zns.img']=5G 00:01:26.269 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:26.269 + (( SPDK_TEST_FTL == 1 )) 00:01:26.269 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:26.269 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:26.269 + for nvme in "${!nvme_files[@]}" 00:01:26.269 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:26.269 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:26.269 + for nvme in "${!nvme_files[@]}" 00:01:26.269 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:26.269 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:26.269 + for nvme in "${!nvme_files[@]}" 00:01:26.269 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:26.269 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:26.269 + for nvme in "${!nvme_files[@]}" 00:01:26.269 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:26.269 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:26.269 + for nvme in "${!nvme_files[@]}" 00:01:26.269 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:26.269 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:26.269 + for nvme in "${!nvme_files[@]}" 00:01:26.269 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:26.269 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:26.269 + for nvme in "${!nvme_files[@]}" 00:01:26.269 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:26.269 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:26.527 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:26.527 + echo 'End stage prepare_nvme.sh' 00:01:26.527 End stage prepare_nvme.sh 00:01:26.537 [Pipeline] sh 00:01:26.817 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:26.817 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora38 00:01:26.817 00:01:26.817 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:26.817 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:26.817 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:26.817 HELP=0 00:01:26.817 DRY_RUN=0 00:01:26.817 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:01:26.817 NVME_DISKS_TYPE=nvme,nvme, 00:01:26.817 NVME_AUTO_CREATE=0 00:01:26.817 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:01:26.817 NVME_CMB=,, 00:01:26.817 NVME_PMR=,, 00:01:26.817 NVME_ZNS=,, 00:01:26.817 NVME_MS=,, 00:01:26.817 NVME_FDP=,, 00:01:26.817 
SPDK_VAGRANT_DISTRO=fedora38 00:01:26.817 SPDK_VAGRANT_VMCPU=10 00:01:26.817 SPDK_VAGRANT_VMRAM=12288 00:01:26.817 SPDK_VAGRANT_PROVIDER=libvirt 00:01:26.817 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:26.817 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:26.817 SPDK_OPENSTACK_NETWORK=0 00:01:26.817 VAGRANT_PACKAGE_BOX=0 00:01:26.817 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:26.817 FORCE_DISTRO=true 00:01:26.817 VAGRANT_BOX_VERSION= 00:01:26.817 EXTRA_VAGRANTFILES= 00:01:26.817 NIC_MODEL=e1000 00:01:26.817 00:01:26.817 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:01:26.817 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:29.347 Bringing machine 'default' up with 'libvirt' provider... 00:01:30.282 ==> default: Creating image (snapshot of base box volume). 00:01:30.282 ==> default: Creating domain with the following settings... 00:01:30.282 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720829416_2a33a2ed0a5f7422b697 00:01:30.282 ==> default: -- Domain type: kvm 00:01:30.282 ==> default: -- Cpus: 10 00:01:30.282 ==> default: -- Feature: acpi 00:01:30.282 ==> default: -- Feature: apic 00:01:30.282 ==> default: -- Feature: pae 00:01:30.282 ==> default: -- Memory: 12288M 00:01:30.282 ==> default: -- Memory Backing: hugepages: 00:01:30.282 ==> default: -- Management MAC: 00:01:30.282 ==> default: -- Loader: 00:01:30.282 ==> default: -- Nvram: 00:01:30.282 ==> default: -- Base box: spdk/fedora38 00:01:30.282 ==> default: -- Storage pool: default 00:01:30.282 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720829416_2a33a2ed0a5f7422b697.img (20G) 00:01:30.282 ==> default: -- Volume Cache: default 00:01:30.282 ==> default: -- Kernel: 00:01:30.282 ==> default: -- Initrd: 00:01:30.282 ==> default: -- Graphics Type: vnc 00:01:30.282 ==> default: -- Graphics Port: -1 00:01:30.282 ==> default: -- Graphics IP: 127.0.0.1 00:01:30.282 ==> default: -- Graphics Password: Not defined 00:01:30.282 ==> default: -- Video Type: cirrus 00:01:30.282 ==> default: -- Video VRAM: 9216 00:01:30.282 ==> default: -- Sound Type: 00:01:30.282 ==> default: -- Keymap: en-us 00:01:30.282 ==> default: -- TPM Path: 00:01:30.282 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:30.282 ==> default: -- Command line args: 00:01:30.282 ==> default: -> value=-device, 00:01:30.282 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:30.282 ==> default: -> value=-drive, 00:01:30.282 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:01:30.282 ==> default: -> value=-device, 00:01:30.282 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:30.282 ==> default: -> value=-device, 00:01:30.282 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:30.282 ==> default: -> value=-drive, 00:01:30.282 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:30.282 ==> default: -> value=-device, 00:01:30.282 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:30.282 ==> default: -> value=-drive, 00:01:30.282 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:30.282 ==> default: -> value=-device, 00:01:30.282 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:30.282 ==> default: -> value=-drive, 00:01:30.282 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:30.282 ==> default: -> value=-device, 00:01:30.282 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:30.541 ==> default: Creating shared folders metadata... 00:01:30.541 ==> default: Starting domain. 00:01:33.070 ==> default: Waiting for domain to get an IP address... 00:01:47.966 ==> default: Waiting for SSH to become available... 00:01:49.340 ==> default: Configuring and enabling network interfaces... 00:01:53.525 default: SSH address: 192.168.121.66:22 00:01:53.525 default: SSH username: vagrant 00:01:53.525 default: SSH auth method: private key 00:01:56.058 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:02.618 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:09.181 ==> default: Mounting SSHFS shared folder... 00:02:10.112 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:10.112 ==> default: Checking Mount.. 00:02:11.487 ==> default: Folder Successfully Mounted! 00:02:11.487 ==> default: Running provisioner: file... 00:02:12.052 default: ~/.gitconfig => .gitconfig 00:02:12.617 00:02:12.617 SUCCESS! 00:02:12.617 00:02:12.617 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:12.617 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:12.617 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:12.617 00:02:12.625 [Pipeline] } 00:02:12.645 [Pipeline] // stage 00:02:12.655 [Pipeline] dir 00:02:12.656 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:02:12.658 [Pipeline] { 00:02:12.672 [Pipeline] catchError 00:02:12.674 [Pipeline] { 00:02:12.690 [Pipeline] sh 00:02:12.968 + vagrant ssh-config --host vagrant 00:02:12.968 + sed -ne /^Host/,$p 00:02:12.968 + tee ssh_conf 00:02:16.263 Host vagrant 00:02:16.263 HostName 192.168.121.66 00:02:16.263 User vagrant 00:02:16.263 Port 22 00:02:16.263 UserKnownHostsFile /dev/null 00:02:16.263 StrictHostKeyChecking no 00:02:16.263 PasswordAuthentication no 00:02:16.263 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:16.263 IdentitiesOnly yes 00:02:16.263 LogLevel FATAL 00:02:16.263 ForwardAgent yes 00:02:16.263 ForwardX11 yes 00:02:16.263 00:02:16.287 [Pipeline] withEnv 00:02:16.290 [Pipeline] { 00:02:16.305 [Pipeline] sh 00:02:16.584 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:16.584 source /etc/os-release 00:02:16.584 [[ -e /image.version ]] && img=$(< /image.version) 00:02:16.584 # Minimal, systemd-like check. 
00:02:16.584 if [[ -e /.dockerenv ]]; then 00:02:16.584 # Clear garbage from the node's name: 00:02:16.584 # agt-er_autotest_547-896 -> autotest_547-896 00:02:16.584 # $HOSTNAME is the actual container id 00:02:16.584 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:16.584 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:16.584 # We can assume this is a mount from a host where container is running, 00:02:16.584 # so fetch its hostname to easily identify the target swarm worker. 00:02:16.584 container="$(< /etc/hostname) ($agent)" 00:02:16.584 else 00:02:16.584 # Fallback 00:02:16.584 container=$agent 00:02:16.584 fi 00:02:16.584 fi 00:02:16.584 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:16.584 00:02:16.595 [Pipeline] } 00:02:16.615 [Pipeline] // withEnv 00:02:16.625 [Pipeline] setCustomBuildProperty 00:02:16.641 [Pipeline] stage 00:02:16.644 [Pipeline] { (Tests) 00:02:16.666 [Pipeline] sh 00:02:16.945 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:17.218 [Pipeline] sh 00:02:17.495 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:17.769 [Pipeline] timeout 00:02:17.769 Timeout set to expire in 40 min 00:02:17.771 [Pipeline] { 00:02:17.788 [Pipeline] sh 00:02:18.067 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:18.634 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:02:18.649 [Pipeline] sh 00:02:18.929 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:19.205 [Pipeline] sh 00:02:19.489 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:19.766 [Pipeline] sh 00:02:20.046 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:20.046 ++ readlink -f spdk_repo 00:02:20.046 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:20.046 + [[ -n /home/vagrant/spdk_repo ]] 00:02:20.046 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:20.046 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:20.046 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:20.046 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:20.046 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:20.046 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:20.046 + cd /home/vagrant/spdk_repo 00:02:20.046 + source /etc/os-release 00:02:20.046 ++ NAME='Fedora Linux' 00:02:20.046 ++ VERSION='38 (Cloud Edition)' 00:02:20.046 ++ ID=fedora 00:02:20.046 ++ VERSION_ID=38 00:02:20.046 ++ VERSION_CODENAME= 00:02:20.046 ++ PLATFORM_ID=platform:f38 00:02:20.046 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:20.046 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:20.046 ++ LOGO=fedora-logo-icon 00:02:20.046 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:20.046 ++ HOME_URL=https://fedoraproject.org/ 00:02:20.046 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:20.046 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:20.046 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:20.046 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:20.046 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:20.046 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:20.046 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:20.046 ++ SUPPORT_END=2024-05-14 00:02:20.046 ++ VARIANT='Cloud Edition' 00:02:20.046 ++ VARIANT_ID=cloud 00:02:20.046 + uname -a 00:02:20.046 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:20.046 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:20.304 Hugepages 00:02:20.304 node hugesize free / total 00:02:20.304 node0 1048576kB 0 / 0 00:02:20.304 node0 2048kB 0 / 0 00:02:20.304 00:02:20.304 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:20.304 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:20.304 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:20.304 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:20.304 + rm -f /tmp/spdk-ld-path 00:02:20.304 + source autorun-spdk.conf 00:02:20.304 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:20.304 ++ SPDK_TEST_NVMF=1 00:02:20.304 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:20.304 ++ SPDK_TEST_USDT=1 00:02:20.304 ++ SPDK_RUN_UBSAN=1 00:02:20.304 ++ SPDK_TEST_NVMF_MDNS=1 00:02:20.304 ++ NET_TYPE=virt 00:02:20.304 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:20.304 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:20.304 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:20.304 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:20.304 ++ RUN_NIGHTLY=1 00:02:20.304 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:20.304 + [[ -n '' ]] 00:02:20.304 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:20.304 + for M in /var/spdk/build-*-manifest.txt 00:02:20.304 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:20.304 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:20.304 + for M in /var/spdk/build-*-manifest.txt 00:02:20.304 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:20.304 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:20.563 ++ uname 00:02:20.563 + [[ Linux == \L\i\n\u\x ]] 00:02:20.563 + sudo dmesg -T 00:02:20.563 + sudo dmesg --clear 00:02:20.563 + dmesg_pid=5864 00:02:20.563 + [[ Fedora Linux == FreeBSD ]] 00:02:20.563 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:20.563 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:20.563 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:20.563 + sudo dmesg -Tw 00:02:20.563 + [[ -x /usr/src/fio-static/fio ]] 00:02:20.563 + 
export FIO_BIN=/usr/src/fio-static/fio 00:02:20.563 + FIO_BIN=/usr/src/fio-static/fio 00:02:20.563 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:20.563 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:20.563 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:20.563 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:20.563 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:20.563 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:20.563 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:20.563 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:20.563 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:20.563 Test configuration: 00:02:20.563 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:20.563 SPDK_TEST_NVMF=1 00:02:20.563 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:20.563 SPDK_TEST_USDT=1 00:02:20.563 SPDK_RUN_UBSAN=1 00:02:20.563 SPDK_TEST_NVMF_MDNS=1 00:02:20.563 NET_TYPE=virt 00:02:20.563 SPDK_JSONRPC_GO_CLIENT=1 00:02:20.563 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:20.563 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:20.563 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:20.563 RUN_NIGHTLY=1 00:11:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:20.563 00:11:07 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:20.563 00:11:07 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:20.563 00:11:07 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:20.563 00:11:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.563 00:11:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.563 00:11:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.563 00:11:07 -- paths/export.sh@5 -- $ export PATH 00:02:20.563 00:11:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.563 00:11:07 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:20.563 00:11:07 -- common/autobuild_common.sh@435 -- $ date +%s 00:02:20.564 00:11:07 -- common/autobuild_common.sh@435 -- 
$ mktemp -dt spdk_1720829467.XXXXXX 00:02:20.564 00:11:07 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720829467.3eWoZ5 00:02:20.564 00:11:07 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:02:20.564 00:11:07 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:02:20.564 00:11:07 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:20.564 00:11:07 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:20.564 00:11:07 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:20.564 00:11:07 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:20.564 00:11:07 -- common/autobuild_common.sh@451 -- $ get_config_params 00:02:20.564 00:11:07 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:20.564 00:11:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.564 00:11:07 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:20.564 00:11:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:20.564 00:11:07 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:20.564 00:11:07 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:20.564 00:11:07 -- spdk/autobuild.sh@16 -- $ date -u 00:02:20.564 Sat Jul 13 12:11:07 AM UTC 2024 00:02:20.564 00:11:07 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:20.564 LTS-59-g4b94202c6 00:02:20.564 00:11:07 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:20.564 00:11:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:20.564 00:11:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:20.564 00:11:07 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:20.564 00:11:07 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:20.564 00:11:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.564 ************************************ 00:02:20.564 START TEST ubsan 00:02:20.564 ************************************ 00:02:20.564 using ubsan 00:02:20.564 00:11:07 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:02:20.564 00:02:20.564 real 0m0.000s 00:02:20.564 user 0m0.000s 00:02:20.564 sys 0m0.000s 00:02:20.564 00:11:07 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:20.564 00:11:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.564 ************************************ 00:02:20.564 END TEST ubsan 00:02:20.564 ************************************ 00:02:20.564 00:11:07 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:20.564 00:11:07 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:20.564 00:11:07 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:20.564 00:11:07 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:02:20.564 00:11:07 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:20.564 00:11:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.564 ************************************ 00:02:20.564 START TEST build_native_dpdk 00:02:20.564 ************************************ 00:02:20.564 00:11:07 -- 
common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:02:20.564 00:11:07 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:20.564 00:11:07 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:20.564 00:11:07 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:20.564 00:11:07 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:20.564 00:11:07 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:20.564 00:11:07 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:20.564 00:11:07 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:20.564 00:11:07 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:20.564 00:11:07 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:20.564 00:11:07 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:20.564 00:11:07 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:20.564 00:11:07 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:20.564 00:11:07 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:20.564 00:11:07 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:20.564 00:11:07 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:20.564 00:11:07 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:20.564 00:11:07 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:20.564 00:11:07 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:20.564 00:11:07 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:20.564 00:11:07 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:20.875 eeb0605f11 version: 23.11.0 00:02:20.875 238778122a doc: update release notes for 23.11 00:02:20.875 46aa6b3cfc doc: fix description of RSS features 00:02:20.875 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:20.875 7e421ae345 devtools: support skipping forbid rule check 00:02:20.875 00:11:07 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:20.875 00:11:07 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:20.875 00:11:07 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:20.875 00:11:07 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:20.875 00:11:07 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:20.875 00:11:07 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:20.875 00:11:07 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:20.875 00:11:07 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:20.875 00:11:07 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:20.875 00:11:07 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:20.875 00:11:07 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:20.875 00:11:07 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:20.875 00:11:07 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:20.875 00:11:07 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:20.875 00:11:07 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:20.875 00:11:07 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:20.875 00:11:07 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:20.875 00:11:07 -- 
common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:20.876 00:11:07 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:20.876 00:11:07 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:20.876 00:11:07 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:20.876 00:11:07 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:20.876 00:11:07 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:20.876 00:11:07 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:20.876 00:11:07 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:20.876 00:11:07 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:20.876 00:11:07 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:20.876 00:11:07 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:20.876 00:11:07 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:20.876 00:11:07 -- scripts/common.sh@343 -- $ case "$op" in 00:02:20.876 00:11:07 -- scripts/common.sh@344 -- $ : 1 00:02:20.876 00:11:07 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:20.876 00:11:07 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:20.876 00:11:07 -- scripts/common.sh@364 -- $ decimal 23 00:02:20.876 00:11:07 -- scripts/common.sh@352 -- $ local d=23 00:02:20.876 00:11:07 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:20.876 00:11:07 -- scripts/common.sh@354 -- $ echo 23 00:02:20.876 00:11:07 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:20.876 00:11:07 -- scripts/common.sh@365 -- $ decimal 21 00:02:20.876 00:11:07 -- scripts/common.sh@352 -- $ local d=21 00:02:20.876 00:11:07 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:20.876 00:11:07 -- scripts/common.sh@354 -- $ echo 21 00:02:20.876 00:11:07 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:20.876 00:11:07 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:20.876 00:11:07 -- scripts/common.sh@366 -- $ return 1 00:02:20.876 00:11:07 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:20.876 patching file config/rte_config.h 00:02:20.876 Hunk #1 succeeded at 60 (offset 1 line). 
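(Editor's sketch, not SPDK's scripts/common.sh source: the xtrace above walks through an "lt"/"cmp_versions" helper that splits each version string on ".-:" and compares the fields numerically, highest-order field first. A minimal bash reconstruction of what that trace appears to do, assuming only what the trace itself shows:

    cmp_versions() {                  # usage: cmp_versions 23.11.0 '<' 21.11.0
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            if (( d1 > d2 )); then [[ $op == '>' || $op == '>=' ]]; return; fi
            if (( d1 < d2 )); then [[ $op == '<' || $op == '<=' ]]; return; fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    # First fields already differ (23 > 21), so "lt 23.11.0 21.11.0" exits 1,
    # matching the "return 1" seen in the trace before patch -p1 runs.
    lt 23.11.0 21.11.0; echo $?       # prints 1

This is only an illustration of the traced comparison logic; the real helper in the repository may differ in details.)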
00:02:20.876 00:11:07 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:20.876 00:11:07 -- common/autobuild_common.sh@178 -- $ uname -s 00:02:20.876 00:11:07 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:20.876 00:11:07 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:20.876 00:11:07 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:26.136 The Meson build system 00:02:26.136 Version: 1.3.1 00:02:26.136 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:26.136 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:26.136 Build type: native build 00:02:26.136 Program cat found: YES (/usr/bin/cat) 00:02:26.136 Project name: DPDK 00:02:26.136 Project version: 23.11.0 00:02:26.136 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:26.136 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:26.136 Host machine cpu family: x86_64 00:02:26.136 Host machine cpu: x86_64 00:02:26.136 Message: ## Building in Developer Mode ## 00:02:26.136 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:26.136 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:26.136 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:26.136 Program python3 found: YES (/usr/bin/python3) 00:02:26.136 Program cat found: YES (/usr/bin/cat) 00:02:26.136 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:26.136 Compiler for C supports arguments -march=native: YES 00:02:26.136 Checking for size of "void *" : 8 00:02:26.136 Checking for size of "void *" : 8 (cached) 00:02:26.136 Library m found: YES 00:02:26.136 Library numa found: YES 00:02:26.136 Has header "numaif.h" : YES 00:02:26.136 Library fdt found: NO 00:02:26.136 Library execinfo found: NO 00:02:26.137 Has header "execinfo.h" : YES 00:02:26.137 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:26.137 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:26.137 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:26.137 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:26.137 Run-time dependency openssl found: YES 3.0.9 00:02:26.137 Run-time dependency libpcap found: YES 1.10.4 00:02:26.137 Has header "pcap.h" with dependency libpcap: YES 00:02:26.137 Compiler for C supports arguments -Wcast-qual: YES 00:02:26.137 Compiler for C supports arguments -Wdeprecated: YES 00:02:26.137 Compiler for C supports arguments -Wformat: YES 00:02:26.137 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:26.137 Compiler for C supports arguments -Wformat-security: NO 00:02:26.137 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:26.137 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:26.137 Compiler for C supports arguments -Wnested-externs: YES 00:02:26.137 Compiler for C supports arguments -Wold-style-definition: YES 00:02:26.137 Compiler for C supports arguments -Wpointer-arith: YES 00:02:26.137 Compiler for C supports arguments -Wsign-compare: YES 00:02:26.137 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:26.137 Compiler for C supports arguments -Wundef: YES 00:02:26.137 Compiler for C supports arguments -Wwrite-strings: YES 00:02:26.137 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:26.137 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:26.137 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:26.137 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:26.137 Program objdump found: YES (/usr/bin/objdump) 00:02:26.137 Compiler for C supports arguments -mavx512f: YES 00:02:26.137 Checking if "AVX512 checking" compiles: YES 00:02:26.137 Fetching value of define "__SSE4_2__" : 1 00:02:26.137 Fetching value of define "__AES__" : 1 00:02:26.137 Fetching value of define "__AVX__" : 1 00:02:26.137 Fetching value of define "__AVX2__" : 1 00:02:26.137 Fetching value of define "__AVX512BW__" : (undefined) 00:02:26.137 Fetching value of define "__AVX512CD__" : (undefined) 00:02:26.137 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:26.137 Fetching value of define "__AVX512F__" : (undefined) 00:02:26.137 Fetching value of define "__AVX512VL__" : (undefined) 00:02:26.137 Fetching value of define "__PCLMUL__" : 1 00:02:26.137 Fetching value of define "__RDRND__" : 1 00:02:26.137 Fetching value of define "__RDSEED__" : 1 00:02:26.137 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:26.137 Fetching value of define "__znver1__" : (undefined) 00:02:26.137 Fetching value of define "__znver2__" : (undefined) 00:02:26.137 Fetching value of define "__znver3__" : (undefined) 00:02:26.137 Fetching value of define "__znver4__" : (undefined) 00:02:26.137 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:26.137 Message: lib/log: Defining dependency "log" 00:02:26.137 Message: lib/kvargs: Defining dependency "kvargs" 00:02:26.137 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:26.137 Checking for function "getentropy" : NO 00:02:26.137 Message: lib/eal: Defining dependency "eal" 00:02:26.137 Message: lib/ring: Defining dependency "ring" 00:02:26.137 Message: lib/rcu: Defining dependency "rcu" 00:02:26.137 Message: lib/mempool: Defining dependency "mempool" 00:02:26.137 Message: lib/mbuf: Defining dependency "mbuf" 00:02:26.137 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:26.137 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:26.137 Compiler for C supports arguments -mpclmul: YES 00:02:26.137 Compiler for C supports arguments -maes: YES 00:02:26.137 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:26.137 Compiler for C supports arguments -mavx512bw: YES 00:02:26.137 Compiler for C supports arguments -mavx512dq: YES 00:02:26.137 Compiler for C supports arguments -mavx512vl: YES 00:02:26.137 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:26.137 Compiler for C supports arguments -mavx2: YES 00:02:26.137 Compiler for C supports arguments -mavx: YES 00:02:26.137 Message: lib/net: Defining dependency "net" 00:02:26.137 Message: lib/meter: Defining dependency "meter" 00:02:26.137 Message: lib/ethdev: Defining dependency "ethdev" 00:02:26.137 Message: lib/pci: Defining dependency "pci" 00:02:26.137 Message: lib/cmdline: Defining dependency "cmdline" 00:02:26.137 Message: lib/metrics: Defining dependency "metrics" 00:02:26.137 Message: lib/hash: Defining dependency "hash" 00:02:26.137 Message: lib/timer: Defining dependency "timer" 00:02:26.137 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:26.137 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:26.137 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:26.137 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:26.137 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:26.137 Message: lib/acl: Defining dependency "acl" 00:02:26.137 Message: lib/bbdev: Defining dependency "bbdev" 00:02:26.137 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:26.137 Run-time dependency libelf found: YES 0.190 00:02:26.137 Message: lib/bpf: Defining dependency "bpf" 00:02:26.137 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:26.137 Message: lib/compressdev: Defining dependency "compressdev" 00:02:26.137 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:26.137 Message: lib/distributor: Defining dependency "distributor" 00:02:26.137 Message: lib/dmadev: Defining dependency "dmadev" 00:02:26.137 Message: lib/efd: Defining dependency "efd" 00:02:26.137 Message: lib/eventdev: Defining dependency "eventdev" 00:02:26.137 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:26.137 Message: lib/gpudev: Defining dependency "gpudev" 00:02:26.137 Message: lib/gro: Defining dependency "gro" 00:02:26.137 Message: lib/gso: Defining dependency "gso" 00:02:26.137 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:26.137 Message: lib/jobstats: Defining dependency "jobstats" 00:02:26.137 Message: lib/latencystats: Defining dependency "latencystats" 00:02:26.137 Message: lib/lpm: Defining dependency "lpm" 00:02:26.137 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:26.137 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:26.137 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:26.137 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:26.137 Message: lib/member: Defining dependency "member" 00:02:26.137 Message: lib/pcapng: Defining dependency "pcapng" 00:02:26.137 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:26.137 Message: lib/power: Defining dependency "power" 00:02:26.137 Message: lib/rawdev: Defining dependency "rawdev" 00:02:26.137 Message: lib/regexdev: Defining dependency "regexdev" 00:02:26.137 Message: lib/mldev: Defining dependency "mldev" 00:02:26.137 Message: lib/rib: Defining dependency "rib" 00:02:26.137 Message: lib/reorder: Defining dependency "reorder" 00:02:26.137 Message: lib/sched: Defining dependency "sched" 00:02:26.137 Message: lib/security: Defining dependency "security" 00:02:26.137 Message: lib/stack: Defining dependency "stack" 00:02:26.137 Has header "linux/userfaultfd.h" : YES 00:02:26.137 Has header "linux/vduse.h" : YES 00:02:26.137 Message: lib/vhost: Defining dependency "vhost" 00:02:26.137 Message: lib/ipsec: Defining dependency "ipsec" 00:02:26.137 Message: lib/pdcp: Defining dependency "pdcp" 00:02:26.137 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:26.137 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:26.137 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:26.137 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:26.137 Message: lib/fib: Defining dependency "fib" 00:02:26.137 Message: lib/port: Defining dependency "port" 00:02:26.137 Message: lib/pdump: Defining dependency "pdump" 00:02:26.137 Message: lib/table: Defining dependency "table" 00:02:26.137 Message: lib/pipeline: Defining dependency "pipeline" 00:02:26.137 Message: lib/graph: Defining dependency "graph" 00:02:26.137 Message: lib/node: Defining dependency "node" 00:02:26.137 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:27.511 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:27.512 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:27.512 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:27.512 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:27.512 Compiler for C supports arguments -Wno-unused-value: YES 00:02:27.512 Compiler for C supports arguments -Wno-format: YES 00:02:27.512 Compiler for C supports arguments -Wno-format-security: YES 00:02:27.512 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:27.512 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:27.512 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:27.512 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:27.512 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:27.512 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:27.512 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:27.512 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:27.512 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:27.512 Has header "sys/epoll.h" : YES 00:02:27.512 Program doxygen found: YES (/usr/bin/doxygen) 00:02:27.512 Configuring doxy-api-html.conf using configuration 00:02:27.512 Configuring doxy-api-man.conf using configuration 00:02:27.512 Program mandb found: YES (/usr/bin/mandb) 00:02:27.512 Program sphinx-build found: NO 00:02:27.512 Configuring rte_build_config.h using configuration 00:02:27.512 Message: 00:02:27.512 ================= 00:02:27.512 Applications Enabled 00:02:27.512 ================= 00:02:27.512 
00:02:27.512 apps: 00:02:27.512 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:27.512 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:27.512 test-pmd, test-regex, test-sad, test-security-perf, 00:02:27.512 00:02:27.512 Message: 00:02:27.512 ================= 00:02:27.512 Libraries Enabled 00:02:27.512 ================= 00:02:27.512 00:02:27.512 libs: 00:02:27.512 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:27.512 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:27.512 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:27.512 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:27.512 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:27.512 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:27.512 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:27.512 00:02:27.512 00:02:27.512 Message: 00:02:27.512 =============== 00:02:27.512 Drivers Enabled 00:02:27.512 =============== 00:02:27.512 00:02:27.512 common: 00:02:27.512 00:02:27.512 bus: 00:02:27.512 pci, vdev, 00:02:27.512 mempool: 00:02:27.512 ring, 00:02:27.512 dma: 00:02:27.512 00:02:27.512 net: 00:02:27.512 i40e, 00:02:27.512 raw: 00:02:27.512 00:02:27.512 crypto: 00:02:27.512 00:02:27.512 compress: 00:02:27.512 00:02:27.512 regex: 00:02:27.512 00:02:27.512 ml: 00:02:27.512 00:02:27.512 vdpa: 00:02:27.512 00:02:27.512 event: 00:02:27.512 00:02:27.512 baseband: 00:02:27.512 00:02:27.512 gpu: 00:02:27.512 00:02:27.512 00:02:27.512 Message: 00:02:27.512 ================= 00:02:27.512 Content Skipped 00:02:27.512 ================= 00:02:27.512 00:02:27.512 apps: 00:02:27.512 00:02:27.512 libs: 00:02:27.512 00:02:27.512 drivers: 00:02:27.512 common/cpt: not in enabled drivers build config 00:02:27.512 common/dpaax: not in enabled drivers build config 00:02:27.512 common/iavf: not in enabled drivers build config 00:02:27.512 common/idpf: not in enabled drivers build config 00:02:27.512 common/mvep: not in enabled drivers build config 00:02:27.512 common/octeontx: not in enabled drivers build config 00:02:27.512 bus/auxiliary: not in enabled drivers build config 00:02:27.512 bus/cdx: not in enabled drivers build config 00:02:27.512 bus/dpaa: not in enabled drivers build config 00:02:27.512 bus/fslmc: not in enabled drivers build config 00:02:27.512 bus/ifpga: not in enabled drivers build config 00:02:27.512 bus/platform: not in enabled drivers build config 00:02:27.512 bus/vmbus: not in enabled drivers build config 00:02:27.512 common/cnxk: not in enabled drivers build config 00:02:27.512 common/mlx5: not in enabled drivers build config 00:02:27.512 common/nfp: not in enabled drivers build config 00:02:27.512 common/qat: not in enabled drivers build config 00:02:27.512 common/sfc_efx: not in enabled drivers build config 00:02:27.512 mempool/bucket: not in enabled drivers build config 00:02:27.512 mempool/cnxk: not in enabled drivers build config 00:02:27.512 mempool/dpaa: not in enabled drivers build config 00:02:27.512 mempool/dpaa2: not in enabled drivers build config 00:02:27.512 mempool/octeontx: not in enabled drivers build config 00:02:27.512 mempool/stack: not in enabled drivers build config 00:02:27.512 dma/cnxk: not in enabled drivers build config 00:02:27.512 dma/dpaa: not in enabled drivers build config 00:02:27.512 dma/dpaa2: not in enabled drivers build config 00:02:27.512 dma/hisilicon: 
not in enabled drivers build config 00:02:27.512 dma/idxd: not in enabled drivers build config 00:02:27.512 dma/ioat: not in enabled drivers build config 00:02:27.512 dma/skeleton: not in enabled drivers build config 00:02:27.512 net/af_packet: not in enabled drivers build config 00:02:27.512 net/af_xdp: not in enabled drivers build config 00:02:27.512 net/ark: not in enabled drivers build config 00:02:27.512 net/atlantic: not in enabled drivers build config 00:02:27.512 net/avp: not in enabled drivers build config 00:02:27.512 net/axgbe: not in enabled drivers build config 00:02:27.512 net/bnx2x: not in enabled drivers build config 00:02:27.512 net/bnxt: not in enabled drivers build config 00:02:27.512 net/bonding: not in enabled drivers build config 00:02:27.512 net/cnxk: not in enabled drivers build config 00:02:27.512 net/cpfl: not in enabled drivers build config 00:02:27.512 net/cxgbe: not in enabled drivers build config 00:02:27.512 net/dpaa: not in enabled drivers build config 00:02:27.512 net/dpaa2: not in enabled drivers build config 00:02:27.512 net/e1000: not in enabled drivers build config 00:02:27.512 net/ena: not in enabled drivers build config 00:02:27.512 net/enetc: not in enabled drivers build config 00:02:27.512 net/enetfec: not in enabled drivers build config 00:02:27.512 net/enic: not in enabled drivers build config 00:02:27.512 net/failsafe: not in enabled drivers build config 00:02:27.512 net/fm10k: not in enabled drivers build config 00:02:27.512 net/gve: not in enabled drivers build config 00:02:27.512 net/hinic: not in enabled drivers build config 00:02:27.512 net/hns3: not in enabled drivers build config 00:02:27.512 net/iavf: not in enabled drivers build config 00:02:27.512 net/ice: not in enabled drivers build config 00:02:27.512 net/idpf: not in enabled drivers build config 00:02:27.512 net/igc: not in enabled drivers build config 00:02:27.512 net/ionic: not in enabled drivers build config 00:02:27.512 net/ipn3ke: not in enabled drivers build config 00:02:27.512 net/ixgbe: not in enabled drivers build config 00:02:27.512 net/mana: not in enabled drivers build config 00:02:27.512 net/memif: not in enabled drivers build config 00:02:27.512 net/mlx4: not in enabled drivers build config 00:02:27.512 net/mlx5: not in enabled drivers build config 00:02:27.512 net/mvneta: not in enabled drivers build config 00:02:27.512 net/mvpp2: not in enabled drivers build config 00:02:27.512 net/netvsc: not in enabled drivers build config 00:02:27.512 net/nfb: not in enabled drivers build config 00:02:27.512 net/nfp: not in enabled drivers build config 00:02:27.512 net/ngbe: not in enabled drivers build config 00:02:27.512 net/null: not in enabled drivers build config 00:02:27.512 net/octeontx: not in enabled drivers build config 00:02:27.512 net/octeon_ep: not in enabled drivers build config 00:02:27.512 net/pcap: not in enabled drivers build config 00:02:27.512 net/pfe: not in enabled drivers build config 00:02:27.512 net/qede: not in enabled drivers build config 00:02:27.512 net/ring: not in enabled drivers build config 00:02:27.512 net/sfc: not in enabled drivers build config 00:02:27.512 net/softnic: not in enabled drivers build config 00:02:27.512 net/tap: not in enabled drivers build config 00:02:27.512 net/thunderx: not in enabled drivers build config 00:02:27.512 net/txgbe: not in enabled drivers build config 00:02:27.512 net/vdev_netvsc: not in enabled drivers build config 00:02:27.512 net/vhost: not in enabled drivers build config 00:02:27.512 net/virtio: not in enabled 
drivers build config 00:02:27.512 net/vmxnet3: not in enabled drivers build config 00:02:27.512 raw/cnxk_bphy: not in enabled drivers build config 00:02:27.512 raw/cnxk_gpio: not in enabled drivers build config 00:02:27.512 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:27.512 raw/ifpga: not in enabled drivers build config 00:02:27.512 raw/ntb: not in enabled drivers build config 00:02:27.512 raw/skeleton: not in enabled drivers build config 00:02:27.512 crypto/armv8: not in enabled drivers build config 00:02:27.512 crypto/bcmfs: not in enabled drivers build config 00:02:27.512 crypto/caam_jr: not in enabled drivers build config 00:02:27.512 crypto/ccp: not in enabled drivers build config 00:02:27.512 crypto/cnxk: not in enabled drivers build config 00:02:27.512 crypto/dpaa_sec: not in enabled drivers build config 00:02:27.512 crypto/dpaa2_sec: not in enabled drivers build config 00:02:27.512 crypto/ipsec_mb: not in enabled drivers build config 00:02:27.512 crypto/mlx5: not in enabled drivers build config 00:02:27.512 crypto/mvsam: not in enabled drivers build config 00:02:27.512 crypto/nitrox: not in enabled drivers build config 00:02:27.512 crypto/null: not in enabled drivers build config 00:02:27.512 crypto/octeontx: not in enabled drivers build config 00:02:27.512 crypto/openssl: not in enabled drivers build config 00:02:27.512 crypto/scheduler: not in enabled drivers build config 00:02:27.512 crypto/uadk: not in enabled drivers build config 00:02:27.512 crypto/virtio: not in enabled drivers build config 00:02:27.512 compress/isal: not in enabled drivers build config 00:02:27.512 compress/mlx5: not in enabled drivers build config 00:02:27.512 compress/octeontx: not in enabled drivers build config 00:02:27.512 compress/zlib: not in enabled drivers build config 00:02:27.512 regex/mlx5: not in enabled drivers build config 00:02:27.512 regex/cn9k: not in enabled drivers build config 00:02:27.512 ml/cnxk: not in enabled drivers build config 00:02:27.512 vdpa/ifc: not in enabled drivers build config 00:02:27.512 vdpa/mlx5: not in enabled drivers build config 00:02:27.512 vdpa/nfp: not in enabled drivers build config 00:02:27.512 vdpa/sfc: not in enabled drivers build config 00:02:27.512 event/cnxk: not in enabled drivers build config 00:02:27.513 event/dlb2: not in enabled drivers build config 00:02:27.513 event/dpaa: not in enabled drivers build config 00:02:27.513 event/dpaa2: not in enabled drivers build config 00:02:27.513 event/dsw: not in enabled drivers build config 00:02:27.513 event/opdl: not in enabled drivers build config 00:02:27.513 event/skeleton: not in enabled drivers build config 00:02:27.513 event/sw: not in enabled drivers build config 00:02:27.513 event/octeontx: not in enabled drivers build config 00:02:27.513 baseband/acc: not in enabled drivers build config 00:02:27.513 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:27.513 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:27.513 baseband/la12xx: not in enabled drivers build config 00:02:27.513 baseband/null: not in enabled drivers build config 00:02:27.513 baseband/turbo_sw: not in enabled drivers build config 00:02:27.513 gpu/cuda: not in enabled drivers build config 00:02:27.513 00:02:27.513 00:02:27.513 Build targets in project: 220 00:02:27.513 00:02:27.513 DPDK 23.11.0 00:02:27.513 00:02:27.513 User defined options 00:02:27.513 libdir : lib 00:02:27.513 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:27.513 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 
00:02:27.513 c_link_args : 00:02:27.513 enable_docs : false 00:02:27.513 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:27.513 enable_kmods : false 00:02:27.513 machine : native 00:02:27.513 tests : false 00:02:27.513 00:02:27.513 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:27.513 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:27.513 00:11:14 -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:27.513 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:27.771 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:27.771 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:27.771 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:27.771 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:27.771 [5/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:27.771 [6/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:27.771 [7/710] Linking static target lib/librte_kvargs.a 00:02:27.771 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:27.771 [9/710] Linking static target lib/librte_log.a 00:02:27.771 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:28.048 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.048 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:28.320 [13/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.320 [14/710] Linking target lib/librte_log.so.24.0 00:02:28.320 [15/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:28.320 [16/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:28.320 [17/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:28.320 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:28.577 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:28.577 [20/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:28.577 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:28.577 [22/710] Linking target lib/librte_kvargs.so.24.0 00:02:28.836 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:28.836 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:28.836 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:28.836 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:29.095 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:29.095 [28/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:29.095 [29/710] Linking static target lib/librte_telemetry.a 00:02:29.095 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:29.095 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:29.095 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:29.354 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:29.354 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:29.354 [35/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:29.354 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:29.354 [37/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.354 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:29.354 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:29.613 [40/710] Linking target lib/librte_telemetry.so.24.0 00:02:29.613 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:29.613 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:29.613 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:29.613 [44/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:29.871 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:29.871 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:29.871 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:30.130 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:30.130 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:30.130 [50/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:30.130 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:30.130 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:30.130 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:30.390 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:30.390 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:30.390 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:30.390 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:30.649 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:30.649 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:30.649 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:30.649 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:30.649 [62/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:30.649 [63/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:30.649 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:30.907 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:30.907 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:30.907 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:31.166 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:31.166 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:31.166 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:31.424 [71/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:31.424 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:31.424 
[73/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:31.424 [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:31.424 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:31.424 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:31.424 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:31.682 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:31.682 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:31.682 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:31.940 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:31.940 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:31.940 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:31.940 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:31.940 [85/710] Linking static target lib/librte_ring.a 00:02:32.197 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:32.197 [87/710] Linking static target lib/librte_eal.a 00:02:32.197 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:32.197 [89/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.197 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:32.456 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:32.456 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:32.456 [93/710] Linking static target lib/librte_mempool.a 00:02:32.456 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:32.714 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:32.714 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:32.714 [97/710] Linking static target lib/librte_rcu.a 00:02:32.971 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:32.971 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:32.971 [100/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:32.971 [101/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.971 [102/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:32.971 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.245 [104/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:33.245 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:33.521 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:33.521 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:33.521 [108/710] Linking static target lib/librte_mbuf.a 00:02:33.521 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:33.521 [110/710] Linking static target lib/librte_net.a 00:02:33.780 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:33.780 [112/710] Linking static target lib/librte_meter.a 00:02:33.780 [113/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:33.780 [114/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.780 [115/710] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:34.038 [116/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.038 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:34.038 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:34.038 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.604 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:34.605 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:34.863 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:34.863 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:34.863 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:34.863 [125/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:34.863 [126/710] Linking static target lib/librte_pci.a 00:02:35.120 [127/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:35.121 [128/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:35.121 [129/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.121 [130/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:35.379 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:35.379 [132/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:35.379 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:35.379 [134/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:35.379 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:35.379 [136/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:35.379 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:35.379 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:35.379 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:35.637 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:35.637 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:35.637 [142/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:35.894 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:35.894 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:35.894 [145/710] Linking static target lib/librte_cmdline.a 00:02:36.152 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:36.152 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:36.152 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:36.152 [149/710] Linking static target lib/librte_metrics.a 00:02:36.152 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:36.410 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.669 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.669 [153/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:36.669 [154/710] Linking static target 
lib/librte_timer.a 00:02:36.669 [155/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:36.928 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.494 [157/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:37.495 [158/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:37.495 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:37.495 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:38.060 [161/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:38.060 [162/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:38.060 [163/710] Linking static target lib/librte_ethdev.a 00:02:38.318 [164/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:38.318 [165/710] Linking static target lib/librte_bitratestats.a 00:02:38.318 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:38.318 [167/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.318 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:38.318 [169/710] Linking static target lib/librte_bbdev.a 00:02:38.318 [170/710] Linking target lib/librte_eal.so.24.0 00:02:38.318 [171/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.577 [172/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:38.577 [173/710] Linking static target lib/librte_hash.a 00:02:38.577 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:38.577 [175/710] Linking target lib/librte_ring.so.24.0 00:02:38.835 [176/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:38.835 [177/710] Linking target lib/librte_rcu.so.24.0 00:02:38.835 [178/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:38.835 [179/710] Linking target lib/librte_mempool.so.24.0 00:02:38.835 [180/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:38.835 [181/710] Linking target lib/librte_meter.so.24.0 00:02:39.094 [182/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.094 [183/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:39.094 [184/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:39.094 [185/710] Linking target lib/librte_timer.so.24.0 00:02:39.094 [186/710] Linking target lib/librte_pci.so.24.0 00:02:39.094 [187/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:39.094 [188/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.094 [189/710] Linking static target lib/acl/libavx2_tmp.a 00:02:39.094 [190/710] Linking target lib/librte_mbuf.so.24.0 00:02:39.094 [191/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:39.094 [192/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:39.094 [193/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:39.094 [194/710] Linking static target lib/acl/libavx512_tmp.a 00:02:39.094 [195/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:39.094 [196/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:39.094 [197/710] Generating symbol file 
lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:39.351 [198/710] Linking target lib/librte_net.so.24.0 00:02:39.351 [199/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:39.352 [200/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:39.352 [201/710] Linking target lib/librte_cmdline.so.24.0 00:02:39.352 [202/710] Linking static target lib/librte_acl.a 00:02:39.352 [203/710] Linking target lib/librte_hash.so.24.0 00:02:39.352 [204/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:39.609 [205/710] Linking target lib/librte_bbdev.so.24.0 00:02:39.609 [206/710] Linking static target lib/librte_cfgfile.a 00:02:39.609 [207/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:39.609 [208/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:39.867 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.867 [210/710] Linking target lib/librte_acl.so.24.0 00:02:39.867 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:39.867 [212/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.867 [213/710] Linking target lib/librte_cfgfile.so.24.0 00:02:39.867 [214/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:39.867 [215/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:40.125 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:40.125 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:40.384 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:40.384 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:40.642 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:40.642 [221/710] Linking static target lib/librte_bpf.a 00:02:40.642 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:40.642 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:40.642 [224/710] Linking static target lib/librte_compressdev.a 00:02:40.642 [225/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:40.900 [226/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.900 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:40.900 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:41.159 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:41.159 [230/710] Linking static target lib/librte_distributor.a 00:02:41.159 [231/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:41.159 [232/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.159 [233/710] Linking target lib/librte_compressdev.so.24.0 00:02:41.417 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.417 [235/710] Linking target lib/librte_distributor.so.24.0 00:02:41.417 [236/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:41.417 [237/710] Linking static target lib/librte_dmadev.a 00:02:41.417 [238/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:42.010 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.010 [240/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:42.010 [241/710] Linking target lib/librte_dmadev.so.24.0 00:02:42.010 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:42.268 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:42.268 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:42.268 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:42.268 [246/710] Linking static target lib/librte_efd.a 00:02:42.526 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:42.526 [248/710] Linking static target lib/librte_cryptodev.a 00:02:42.526 [249/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:42.526 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.785 [251/710] Linking target lib/librte_efd.so.24.0 00:02:43.042 [252/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:43.042 [253/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:43.042 [254/710] Linking static target lib/librte_dispatcher.a 00:02:43.042 [255/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.042 [256/710] Linking target lib/librte_ethdev.so.24.0 00:02:43.300 [257/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:43.300 [258/710] Linking static target lib/librte_gpudev.a 00:02:43.300 [259/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:43.300 [260/710] Linking target lib/librte_metrics.so.24.0 00:02:43.300 [261/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:43.300 [262/710] Linking target lib/librte_bpf.so.24.0 00:02:43.558 [263/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:43.558 [264/710] Linking target lib/librte_bitratestats.so.24.0 00:02:43.558 [265/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:43.558 [266/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:43.558 [267/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.558 [268/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:43.816 [269/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:43.816 [270/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:43.816 [271/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.073 [272/710] Linking target lib/librte_cryptodev.so.24.0 00:02:44.073 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:44.073 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.073 [275/710] Linking target lib/librte_gpudev.so.24.0 00:02:44.073 [276/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:44.331 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:44.331 [278/710] Compiling C object 
lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:44.331 [279/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:44.331 [280/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:44.331 [281/710] Linking static target lib/librte_gro.a 00:02:44.331 [282/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:44.331 [283/710] Linking static target lib/librte_eventdev.a 00:02:44.589 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:44.589 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:44.589 [286/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.589 [287/710] Linking target lib/librte_gro.so.24.0 00:02:44.846 [288/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:44.846 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:44.846 [290/710] Linking static target lib/librte_gso.a 00:02:45.105 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:45.105 [292/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.105 [293/710] Linking target lib/librte_gso.so.24.0 00:02:45.105 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:45.105 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:45.105 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:45.363 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:45.363 [298/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:45.363 [299/710] Linking static target lib/librte_jobstats.a 00:02:45.363 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:45.363 [301/710] Linking static target lib/librte_ip_frag.a 00:02:45.621 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:45.621 [303/710] Linking static target lib/librte_latencystats.a 00:02:45.621 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.621 [305/710] Linking target lib/librte_jobstats.so.24.0 00:02:45.621 [306/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.621 [307/710] Linking target lib/librte_ip_frag.so.24.0 00:02:45.621 [308/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:45.878 [309/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:45.878 [310/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:45.878 [311/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.878 [312/710] Linking target lib/librte_latencystats.so.24.0 00:02:45.878 [313/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:45.878 [314/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:45.878 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:45.878 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:46.135 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:46.392 [318/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:46.392 [319/710] Compiling C object 
lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:46.392 [320/710] Linking static target lib/librte_lpm.a 00:02:46.392 [321/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.650 [322/710] Linking target lib/librte_eventdev.so.24.0 00:02:46.650 [323/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:46.650 [324/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:46.650 [325/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:46.650 [326/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:46.650 [327/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:46.650 [328/710] Linking target lib/librte_dispatcher.so.24.0 00:02:46.650 [329/710] Linking static target lib/librte_pcapng.a 00:02:46.650 [330/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.907 [331/710] Linking target lib/librte_lpm.so.24.0 00:02:46.907 [332/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:46.907 [333/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:46.907 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:46.907 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.907 [336/710] Linking target lib/librte_pcapng.so.24.0 00:02:47.165 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:47.165 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:47.165 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:47.423 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:47.423 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:47.423 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:47.423 [343/710] Linking static target lib/librte_power.a 00:02:47.681 [344/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:47.681 [345/710] Linking static target lib/librte_regexdev.a 00:02:47.681 [346/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:47.681 [347/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:47.681 [348/710] Linking static target lib/librte_rawdev.a 00:02:47.681 [349/710] Linking static target lib/librte_member.a 00:02:47.681 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:47.681 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:47.939 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:47.939 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.939 [354/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:47.939 [355/710] Linking static target lib/librte_mldev.a 00:02:47.939 [356/710] Linking target lib/librte_member.so.24.0 00:02:47.939 [357/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.197 [358/710] Linking target lib/librte_rawdev.so.24.0 00:02:48.197 [359/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:48.197 [360/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:48.197 [361/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:48.197 [362/710] Linking target lib/librte_power.so.24.0 00:02:48.197 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.197 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:48.455 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:48.714 [366/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:48.714 [367/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:48.714 [368/710] Linking static target lib/librte_reorder.a 00:02:48.714 [369/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:48.714 [370/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:48.714 [371/710] Linking static target lib/librte_rib.a 00:02:48.714 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:48.714 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:48.971 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:48.971 [375/710] Linking static target lib/librte_stack.a 00:02:48.971 [376/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.971 [377/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:48.971 [378/710] Linking target lib/librte_reorder.so.24.0 00:02:48.971 [379/710] Linking static target lib/librte_security.a 00:02:48.971 [380/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.228 [381/710] Linking target lib/librte_stack.so.24.0 00:02:49.228 [382/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:49.228 [383/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.229 [384/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.229 [385/710] Linking target lib/librte_mldev.so.24.0 00:02:49.229 [386/710] Linking target lib/librte_rib.so.24.0 00:02:49.229 [387/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:49.486 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.486 [389/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:49.486 [390/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:49.486 [391/710] Linking target lib/librte_security.so.24.0 00:02:49.486 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:49.486 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:49.744 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:49.744 [395/710] Linking static target lib/librte_sched.a 00:02:50.011 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:50.011 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.306 [398/710] Linking target lib/librte_sched.so.24.0 00:02:50.306 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:50.306 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:50.306 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:50.306 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:50.871 [403/710] 
Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:50.872 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:50.872 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:51.130 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:51.130 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:51.388 [408/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:51.388 [409/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:51.388 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:51.388 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:51.646 [412/710] Linking static target lib/librte_ipsec.a 00:02:51.646 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:51.904 [414/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.904 [415/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:51.904 [416/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:51.904 [417/710] Linking target lib/librte_ipsec.so.24.0 00:02:51.904 [418/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:51.904 [419/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:51.904 [420/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:51.904 [421/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:51.904 [422/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:51.904 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:52.838 [424/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:52.838 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:52.838 [426/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:52.838 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:52.838 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:52.838 [429/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:53.097 [430/710] Linking static target lib/librte_fib.a 00:02:53.097 [431/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:53.097 [432/710] Linking static target lib/librte_pdcp.a 00:02:53.355 [433/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.355 [434/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.355 [435/710] Linking target lib/librte_fib.so.24.0 00:02:53.355 [436/710] Linking target lib/librte_pdcp.so.24.0 00:02:53.613 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:53.871 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:53.871 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:53.871 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:54.129 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:54.129 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:54.387 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:54.387 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:54.644 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 
00:02:54.644 [446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:54.644 [447/710] Linking static target lib/librte_port.a 00:02:54.901 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:54.902 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:54.902 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:54.902 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:55.159 [452/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.159 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:55.159 [454/710] Linking target lib/librte_port.so.24.0 00:02:55.159 [455/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:55.159 [456/710] Linking static target lib/librte_pdump.a 00:02:55.159 [457/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:55.159 [458/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:55.159 [459/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:55.418 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.418 [461/710] Linking target lib/librte_pdump.so.24.0 00:02:55.418 [462/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:55.982 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:55.982 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:55.982 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:55.982 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:55.982 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:56.239 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:56.496 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:56.496 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:56.496 [471/710] Linking static target lib/librte_table.a 00:02:56.496 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:56.752 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:57.009 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.009 [475/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:57.009 [476/710] Linking target lib/librte_table.so.24.0 00:02:57.267 [477/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:57.267 [478/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:57.267 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:57.524 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:57.780 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:58.038 [482/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:58.038 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:58.038 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:58.038 [485/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:58.038 [486/710] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:58.603 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:58.603 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:58.603 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:58.603 [490/710] Linking static target lib/librte_graph.a 00:02:58.861 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:58.861 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:58.861 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:59.452 [494/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:59.452 [495/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.452 [496/710] Linking target lib/librte_graph.so.24.0 00:02:59.452 [497/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:59.452 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:59.452 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:00.016 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:00.016 [501/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:00.016 [502/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:00.016 [503/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:00.016 [504/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:00.016 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:00.273 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:00.530 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:00.530 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:00.788 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:00.788 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:00.788 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:00.788 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:00.788 [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:01.045 [514/710] Linking static target lib/librte_node.a 00:03:01.045 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:01.302 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.302 [517/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:01.302 [518/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:01.302 [519/710] Linking target lib/librte_node.so.24.0 00:03:01.302 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:01.302 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:01.302 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:01.559 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:01.559 [524/710] Linking static target drivers/librte_bus_vdev.a 00:03:01.559 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:01.559 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:01.559 [527/710] Linking static target 
drivers/librte_bus_pci.a 00:03:01.559 [528/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.816 [529/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:01.816 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:01.816 [531/710] Linking target drivers/librte_bus_vdev.so.24.0 00:03:01.816 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:01.816 [533/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:01.816 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:01.816 [535/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:02.072 [536/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.072 [537/710] Linking target drivers/librte_bus_pci.so.24.0 00:03:02.072 [538/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:02.072 [539/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:02.072 [540/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:02.329 [541/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:02.329 [542/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:02.329 [543/710] Linking static target drivers/librte_mempool_ring.a 00:03:02.329 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:02.329 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:03:02.587 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:02.844 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:03.102 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:03.102 [549/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:03.102 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:03.102 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:04.036 [552/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:04.036 [553/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:04.036 [554/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:04.036 [555/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:04.036 [556/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:04.294 [557/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:04.861 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:04.861 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:04.861 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:04.861 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:04.861 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:05.426 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:05.684 [564/710] 
Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:05.684 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:05.684 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:06.250 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:06.250 [568/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:06.250 [569/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:06.250 [570/710] Linking static target lib/librte_vhost.a 00:03:06.250 [571/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:06.250 [572/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:06.508 [573/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:06.508 [574/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:06.508 [575/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:06.765 [576/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:06.765 [577/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:07.022 [578/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:07.022 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:07.022 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:07.280 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:07.280 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:07.537 [583/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.537 [584/710] Linking target lib/librte_vhost.so.24.0 00:03:07.537 [585/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:07.537 [586/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:07.537 [587/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:07.537 [588/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:07.537 [589/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:07.795 [590/710] Linking static target drivers/librte_net_i40e.a 00:03:07.795 [591/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:07.795 [592/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:07.795 [593/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:08.052 [594/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:08.311 [595/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:08.311 [596/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:08.311 [597/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:08.574 [598/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.574 [599/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:08.833 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:08.833 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:09.105 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:09.105 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:09.105 
[604/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:09.105 [605/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:09.363 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:09.363 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:09.929 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:09.929 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:09.929 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:09.929 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:09.929 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:10.187 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:10.187 [614/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:10.187 [615/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:10.187 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:10.187 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:10.753 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:10.753 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:10.753 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:11.012 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:11.012 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:11.269 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:11.835 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:12.095 [625/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:12.095 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:12.095 [627/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:12.354 [628/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:12.354 [629/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:12.354 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:12.354 [631/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:12.612 [632/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:12.612 [633/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:12.870 [634/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:12.870 [635/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:12.870 [636/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:12.870 [637/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:13.128 [638/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:13.128 [639/710] Linking static target lib/librte_pipeline.a 
00:03:13.128 [640/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:13.128 [641/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:13.386 [642/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:13.386 [643/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:13.386 [644/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:13.644 [645/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:13.644 [646/710] Linking target app/dpdk-dumpcap 00:03:13.644 [647/710] Linking target app/dpdk-graph 00:03:13.902 [648/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:13.903 [649/710] Linking target app/dpdk-pdump 00:03:13.903 [650/710] Linking target app/dpdk-proc-info 00:03:13.903 [651/710] Linking target app/dpdk-test-acl 00:03:14.161 [652/710] Linking target app/dpdk-test-cmdline 00:03:14.161 [653/710] Linking target app/dpdk-test-compress-perf 00:03:14.161 [654/710] Linking target app/dpdk-test-crypto-perf 00:03:14.419 [655/710] Linking target app/dpdk-test-dma-perf 00:03:14.419 [656/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:14.419 [657/710] Linking target app/dpdk-test-eventdev 00:03:14.419 [658/710] Linking target app/dpdk-test-flow-perf 00:03:14.419 [659/710] Linking target app/dpdk-test-fib 00:03:14.677 [660/710] Linking target app/dpdk-test-gpudev 00:03:14.934 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:14.934 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:14.934 [663/710] Linking target app/dpdk-test-bbdev 00:03:14.934 [664/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:14.934 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:15.191 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:15.191 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:15.191 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:15.449 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:15.449 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:15.449 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:15.449 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:15.449 [673/710] Linking target app/dpdk-test-mldev 00:03:16.014 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:16.014 [675/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.014 [676/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:16.014 [677/710] Linking target lib/librte_pipeline.so.24.0 00:03:16.014 [678/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:16.271 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:16.529 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:16.529 [681/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:16.786 [682/710] Linking target app/dpdk-test-pipeline 00:03:16.786 [683/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 
00:03:16.786 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:17.044 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:17.302 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:17.302 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:17.564 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:17.564 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:17.564 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:17.833 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:17.833 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:17.834 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:18.100 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:18.364 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:18.622 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:18.879 [697/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:18.879 [698/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:18.879 [699/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:19.137 [700/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:19.137 [701/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:19.137 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:19.137 [703/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:19.395 [704/710] Linking target app/dpdk-test-sad 00:03:19.652 [705/710] Linking target app/dpdk-test-regex 00:03:19.652 [706/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:19.652 [707/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:19.909 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:20.167 [709/710] Linking target app/dpdk-testpmd 00:03:20.425 [710/710] Linking target app/dpdk-test-security-perf 00:03:20.425 00:12:07 -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:20.425 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:20.425 [0/1] Installing files. 
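For reference, a minimal sketch of the meson/ninja flow that this part of the log records. Only the `ninja ... install` invocation is taken verbatim from the log above; the `meson setup` line and its options are assumptions (the configure step run by autobuild_common.sh is not shown in this excerpt), and the `--prefix` value is only inferred from the install destinations listed below.

```bash
# Hedged sketch of reproducing the DPDK build/install step seen in this log.
cd /home/vagrant/spdk_repo/dpdk

# Assumed configure step: actual options used by the job are not visible here;
# the prefix is inferred from the .../dpdk/build/share/dpdk/... install paths.
meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build

# Compile the targets (the [1/710]..[710/710] lines above), then install,
# matching the "ninja -C .../build-tmp -j10 install" command in the log.
ninja -C build-tmp -j10
ninja -C build-tmp -j10 install
```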
00:03:20.686 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.686 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.687 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:20.687 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:20.687 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.688 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.688 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.689 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:20.689 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:20.690 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:20.690 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.690 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.948 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.948 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.948 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.948 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.948 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.948 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.948 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.948 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.948 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.948 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:20.949 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:20.949 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.949 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.210 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.210 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.210 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.210 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:21.210 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.210 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:21.210 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.210 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:21.210 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.210 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:21.210 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.210 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.211 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:21.212 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.213 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.213 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.213 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.213 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.213 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.213 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.213 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.213 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.213 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.213 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:21.213 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:21.213 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:21.213 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:21.213 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:21.213 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:21.213 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:21.213 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:21.213 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:21.213 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:21.213 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:21.213 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:21.213 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:21.213 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:21.213 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:21.213 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:21.213 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:21.213 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:21.213 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:21.213 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:21.213 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:21.213 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:21.213 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:21.213 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:21.213 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:21.213 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:21.213 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:21.213 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:21.213 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:21.213 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:21.213 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:21.213 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:21.213 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:21.213 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:21.213 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:21.213 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:21.213 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:21.213 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:21.213 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:21.213 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:21.213 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:21.213 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:21.213 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:21.213 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:21.213 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:21.213 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:21.213 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:21.213 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:21.213 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:21.213 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:21.213 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:21.213 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:21.213 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:21.213 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:21.213 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:21.213 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:21.213 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:21.213 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:21.213 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:21.213 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:21.213 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:21.213 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:21.213 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:21.213 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:21.213 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:21.213 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:21.213 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:21.213 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:21.213 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:21.213 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:21.213 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:21.213 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:21.213 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:21.213 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:21.213 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:21.213 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:21.213 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:21.213 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:21.213 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:21.213 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:21.213 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:21.213 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:21.213 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:21.213 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:21.213 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:21.213 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:21.213 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:21.213 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:21.213 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:21.213 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:21.213 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:21.213 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:21.213 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:21.213 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:21.213 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:21.213 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:21.213 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:21.213 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:21.213 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:21.213 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:21.213 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:21.213 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:21.213 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:21.213 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:21.213 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:21.214 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:21.214 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:21.214 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:21.214 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:21.214 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:21.214 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:21.214 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:21.214 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:21.214 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:21.214 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:21.214 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:21.214 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:21.214 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:21.214 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:21.214 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:21.214 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:21.214 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:21.214 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:21.214 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:21.214 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:21.214 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:21.214 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:21.214 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:21.214 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:21.214 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:21.214 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:21.214 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:21.214 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:21.214 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:21.214 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:21.472 00:12:08 -- common/autobuild_common.sh@189 -- $ uname -s 00:03:21.472 00:12:08 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:21.472 00:12:08 -- common/autobuild_common.sh@200 -- $ cat 00:03:21.472 ************************************ 00:03:21.472 END TEST build_native_dpdk 00:03:21.472 ************************************ 00:03:21.472 00:12:08 -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:21.472 00:03:21.472 real 1m0.681s 00:03:21.472 user 7m23.584s 00:03:21.472 sys 1m11.218s 00:03:21.472 00:12:08 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:21.472 00:12:08 -- common/autotest_common.sh@10 -- $ set +x 00:03:21.472 00:12:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:21.472 00:12:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:21.472 00:12:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:21.472 00:12:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:21.472 00:12:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:21.472 00:12:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:21.472 00:12:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:21.472 00:12:08 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:03:21.472 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:21.730 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.730 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:21.730 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:21.988 Using 'verbs' RDMA provider 00:03:37.797 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:49.989 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:49.989 go version go1.21.1 linux/amd64 00:03:49.989 Creating mk/config.mk...done. 00:03:49.989 Creating mk/cc.flags.mk...done. 00:03:49.989 Type 'make' to build. 00:03:49.989 00:12:35 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:49.989 00:12:35 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:03:49.989 00:12:35 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:49.989 00:12:35 -- common/autotest_common.sh@10 -- $ set +x 00:03:49.989 ************************************ 00:03:49.989 START TEST make 00:03:49.989 ************************************ 00:03:49.989 00:12:35 -- common/autotest_common.sh@1104 -- $ make -j10 00:03:49.989 make[1]: Nothing to be done for 'all'. 
00:04:16.546 CC lib/ut_mock/mock.o 00:04:16.546 CC lib/ut/ut.o 00:04:16.546 CC lib/log/log.o 00:04:16.546 CC lib/log/log_flags.o 00:04:16.546 CC lib/log/log_deprecated.o 00:04:16.546 LIB libspdk_ut_mock.a 00:04:16.546 LIB libspdk_ut.a 00:04:16.546 SO libspdk_ut_mock.so.5.0 00:04:16.546 SO libspdk_ut.so.1.0 00:04:16.546 LIB libspdk_log.a 00:04:16.546 SO libspdk_log.so.6.1 00:04:16.546 SYMLINK libspdk_ut_mock.so 00:04:16.546 SYMLINK libspdk_ut.so 00:04:16.546 SYMLINK libspdk_log.so 00:04:16.546 CC lib/util/base64.o 00:04:16.546 CC lib/util/bit_array.o 00:04:16.546 CC lib/util/cpuset.o 00:04:16.546 CC lib/dma/dma.o 00:04:16.546 CC lib/util/crc16.o 00:04:16.546 CC lib/ioat/ioat.o 00:04:16.546 CC lib/util/crc32.o 00:04:16.546 CC lib/util/crc32c.o 00:04:16.546 CXX lib/trace_parser/trace.o 00:04:16.546 CC lib/vfio_user/host/vfio_user_pci.o 00:04:16.546 CC lib/util/crc32_ieee.o 00:04:16.546 CC lib/vfio_user/host/vfio_user.o 00:04:16.546 CC lib/util/crc64.o 00:04:16.546 CC lib/util/dif.o 00:04:16.546 LIB libspdk_dma.a 00:04:16.546 SO libspdk_dma.so.3.0 00:04:16.546 CC lib/util/fd.o 00:04:16.546 CC lib/util/file.o 00:04:16.546 SYMLINK libspdk_dma.so 00:04:16.546 CC lib/util/hexlify.o 00:04:16.546 LIB libspdk_ioat.a 00:04:16.546 CC lib/util/iov.o 00:04:16.546 CC lib/util/math.o 00:04:16.546 SO libspdk_ioat.so.6.0 00:04:16.546 CC lib/util/pipe.o 00:04:16.546 CC lib/util/strerror_tls.o 00:04:16.546 LIB libspdk_vfio_user.a 00:04:16.546 SYMLINK libspdk_ioat.so 00:04:16.546 CC lib/util/string.o 00:04:16.546 CC lib/util/uuid.o 00:04:16.546 SO libspdk_vfio_user.so.4.0 00:04:16.546 CC lib/util/fd_group.o 00:04:16.546 SYMLINK libspdk_vfio_user.so 00:04:16.546 CC lib/util/xor.o 00:04:16.546 CC lib/util/zipf.o 00:04:16.546 LIB libspdk_util.a 00:04:16.546 SO libspdk_util.so.8.0 00:04:16.546 SYMLINK libspdk_util.so 00:04:16.546 LIB libspdk_trace_parser.a 00:04:16.546 SO libspdk_trace_parser.so.4.0 00:04:16.546 CC lib/json/json_parse.o 00:04:16.546 CC lib/idxd/idxd.o 00:04:16.546 CC lib/idxd/idxd_user.o 00:04:16.546 CC lib/idxd/idxd_kernel.o 00:04:16.546 CC lib/env_dpdk/env.o 00:04:16.546 CC lib/env_dpdk/memory.o 00:04:16.546 CC lib/rdma/common.o 00:04:16.546 CC lib/vmd/vmd.o 00:04:16.546 CC lib/conf/conf.o 00:04:16.546 SYMLINK libspdk_trace_parser.so 00:04:16.546 CC lib/vmd/led.o 00:04:16.546 CC lib/env_dpdk/pci.o 00:04:16.546 CC lib/json/json_util.o 00:04:16.546 LIB libspdk_conf.a 00:04:16.546 CC lib/json/json_write.o 00:04:16.546 CC lib/env_dpdk/init.o 00:04:16.546 SO libspdk_conf.so.5.0 00:04:16.546 CC lib/rdma/rdma_verbs.o 00:04:16.546 SYMLINK libspdk_conf.so 00:04:16.546 CC lib/env_dpdk/threads.o 00:04:16.546 CC lib/env_dpdk/pci_ioat.o 00:04:16.547 CC lib/env_dpdk/pci_virtio.o 00:04:16.547 CC lib/env_dpdk/pci_vmd.o 00:04:16.547 LIB libspdk_rdma.a 00:04:16.547 SO libspdk_rdma.so.5.0 00:04:16.547 LIB libspdk_json.a 00:04:16.547 CC lib/env_dpdk/pci_idxd.o 00:04:16.547 LIB libspdk_idxd.a 00:04:16.547 SO libspdk_json.so.5.1 00:04:16.547 SO libspdk_idxd.so.11.0 00:04:16.547 SYMLINK libspdk_rdma.so 00:04:16.547 CC lib/env_dpdk/pci_event.o 00:04:16.547 CC lib/env_dpdk/sigbus_handler.o 00:04:16.547 LIB libspdk_vmd.a 00:04:16.547 CC lib/env_dpdk/pci_dpdk.o 00:04:16.547 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:16.547 SO libspdk_vmd.so.5.0 00:04:16.547 SYMLINK libspdk_json.so 00:04:16.547 SYMLINK libspdk_idxd.so 00:04:16.547 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:16.547 SYMLINK libspdk_vmd.so 00:04:16.547 CC lib/jsonrpc/jsonrpc_server.o 00:04:16.547 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:16.547 CC 
lib/jsonrpc/jsonrpc_client.o 00:04:16.547 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:16.547 LIB libspdk_jsonrpc.a 00:04:16.547 SO libspdk_jsonrpc.so.5.1 00:04:16.547 SYMLINK libspdk_jsonrpc.so 00:04:16.547 CC lib/rpc/rpc.o 00:04:16.547 LIB libspdk_env_dpdk.a 00:04:16.804 SO libspdk_env_dpdk.so.13.0 00:04:16.804 LIB libspdk_rpc.a 00:04:16.804 SO libspdk_rpc.so.5.0 00:04:16.804 SYMLINK libspdk_env_dpdk.so 00:04:16.804 SYMLINK libspdk_rpc.so 00:04:17.061 CC lib/notify/notify.o 00:04:17.061 CC lib/notify/notify_rpc.o 00:04:17.061 CC lib/trace/trace.o 00:04:17.061 CC lib/trace/trace_rpc.o 00:04:17.061 CC lib/trace/trace_flags.o 00:04:17.061 CC lib/sock/sock.o 00:04:17.061 CC lib/sock/sock_rpc.o 00:04:17.319 LIB libspdk_notify.a 00:04:17.319 SO libspdk_notify.so.5.0 00:04:17.319 LIB libspdk_trace.a 00:04:17.319 SO libspdk_trace.so.9.0 00:04:17.319 SYMLINK libspdk_notify.so 00:04:17.319 SYMLINK libspdk_trace.so 00:04:17.319 LIB libspdk_sock.a 00:04:17.577 SO libspdk_sock.so.8.0 00:04:17.577 SYMLINK libspdk_sock.so 00:04:17.577 CC lib/thread/thread.o 00:04:17.577 CC lib/thread/iobuf.o 00:04:17.835 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:17.835 CC lib/nvme/nvme_fabric.o 00:04:17.835 CC lib/nvme/nvme_ctrlr.o 00:04:17.835 CC lib/nvme/nvme_ns_cmd.o 00:04:17.835 CC lib/nvme/nvme_ns.o 00:04:17.835 CC lib/nvme/nvme_pcie_common.o 00:04:17.835 CC lib/nvme/nvme_qpair.o 00:04:17.835 CC lib/nvme/nvme_pcie.o 00:04:17.835 CC lib/nvme/nvme.o 00:04:18.402 CC lib/nvme/nvme_quirks.o 00:04:18.402 CC lib/nvme/nvme_transport.o 00:04:18.661 CC lib/nvme/nvme_discovery.o 00:04:18.661 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:18.661 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:18.661 CC lib/nvme/nvme_tcp.o 00:04:18.919 CC lib/nvme/nvme_opal.o 00:04:18.919 CC lib/nvme/nvme_io_msg.o 00:04:19.178 LIB libspdk_thread.a 00:04:19.178 CC lib/nvme/nvme_poll_group.o 00:04:19.178 SO libspdk_thread.so.9.0 00:04:19.178 SYMLINK libspdk_thread.so 00:04:19.178 CC lib/nvme/nvme_zns.o 00:04:19.178 CC lib/nvme/nvme_cuse.o 00:04:19.178 CC lib/accel/accel.o 00:04:19.178 CC lib/nvme/nvme_vfio_user.o 00:04:19.436 CC lib/nvme/nvme_rdma.o 00:04:19.436 CC lib/blob/blobstore.o 00:04:19.436 CC lib/blob/request.o 00:04:19.694 CC lib/blob/zeroes.o 00:04:19.990 CC lib/blob/blob_bs_dev.o 00:04:19.990 CC lib/accel/accel_rpc.o 00:04:19.990 CC lib/accel/accel_sw.o 00:04:20.248 CC lib/init/json_config.o 00:04:20.249 CC lib/init/subsystem.o 00:04:20.249 CC lib/virtio/virtio.o 00:04:20.249 CC lib/virtio/virtio_vhost_user.o 00:04:20.249 CC lib/virtio/virtio_vfio_user.o 00:04:20.249 CC lib/virtio/virtio_pci.o 00:04:20.249 LIB libspdk_accel.a 00:04:20.249 CC lib/init/subsystem_rpc.o 00:04:20.249 CC lib/init/rpc.o 00:04:20.249 SO libspdk_accel.so.14.0 00:04:20.506 SYMLINK libspdk_accel.so 00:04:20.506 LIB libspdk_init.a 00:04:20.506 SO libspdk_init.so.4.0 00:04:20.506 CC lib/bdev/bdev_rpc.o 00:04:20.506 CC lib/bdev/bdev_zone.o 00:04:20.506 CC lib/bdev/bdev.o 00:04:20.506 CC lib/bdev/part.o 00:04:20.506 CC lib/bdev/scsi_nvme.o 00:04:20.506 LIB libspdk_virtio.a 00:04:20.506 SYMLINK libspdk_init.so 00:04:20.506 SO libspdk_virtio.so.6.0 00:04:20.763 SYMLINK libspdk_virtio.so 00:04:20.763 CC lib/event/app.o 00:04:20.763 CC lib/event/reactor.o 00:04:20.763 CC lib/event/log_rpc.o 00:04:20.763 LIB libspdk_nvme.a 00:04:20.763 CC lib/event/app_rpc.o 00:04:20.763 CC lib/event/scheduler_static.o 00:04:21.021 SO libspdk_nvme.so.12.0 00:04:21.021 LIB libspdk_event.a 00:04:21.279 SO libspdk_event.so.12.0 00:04:21.279 SYMLINK libspdk_nvme.so 00:04:21.279 SYMLINK libspdk_event.so 00:04:22.211 
LIB libspdk_blob.a 00:04:22.211 SO libspdk_blob.so.10.1 00:04:22.211 SYMLINK libspdk_blob.so 00:04:22.468 CC lib/lvol/lvol.o 00:04:22.468 CC lib/blobfs/blobfs.o 00:04:22.468 CC lib/blobfs/tree.o 00:04:23.035 LIB libspdk_bdev.a 00:04:23.035 SO libspdk_bdev.so.14.0 00:04:23.294 SYMLINK libspdk_bdev.so 00:04:23.294 LIB libspdk_blobfs.a 00:04:23.294 LIB libspdk_lvol.a 00:04:23.294 CC lib/scsi/dev.o 00:04:23.294 CC lib/scsi/lun.o 00:04:23.294 CC lib/scsi/port.o 00:04:23.294 CC lib/scsi/scsi.o 00:04:23.294 CC lib/nbd/nbd.o 00:04:23.294 CC lib/ublk/ublk.o 00:04:23.294 CC lib/ftl/ftl_core.o 00:04:23.294 CC lib/nvmf/ctrlr.o 00:04:23.294 SO libspdk_blobfs.so.9.0 00:04:23.294 SO libspdk_lvol.so.9.1 00:04:23.294 SYMLINK libspdk_blobfs.so 00:04:23.294 SYMLINK libspdk_lvol.so 00:04:23.294 CC lib/nvmf/ctrlr_discovery.o 00:04:23.294 CC lib/ftl/ftl_init.o 00:04:23.552 CC lib/ftl/ftl_layout.o 00:04:23.552 CC lib/ftl/ftl_debug.o 00:04:23.552 CC lib/ftl/ftl_io.o 00:04:23.552 CC lib/scsi/scsi_bdev.o 00:04:23.552 CC lib/scsi/scsi_pr.o 00:04:23.811 CC lib/ublk/ublk_rpc.o 00:04:23.811 CC lib/ftl/ftl_sb.o 00:04:23.811 CC lib/nbd/nbd_rpc.o 00:04:23.811 CC lib/ftl/ftl_l2p.o 00:04:23.811 CC lib/ftl/ftl_l2p_flat.o 00:04:23.811 CC lib/nvmf/ctrlr_bdev.o 00:04:24.069 CC lib/scsi/scsi_rpc.o 00:04:24.069 CC lib/ftl/ftl_nv_cache.o 00:04:24.069 CC lib/nvmf/subsystem.o 00:04:24.069 LIB libspdk_nbd.a 00:04:24.069 CC lib/scsi/task.o 00:04:24.069 SO libspdk_nbd.so.6.0 00:04:24.069 LIB libspdk_ublk.a 00:04:24.069 CC lib/ftl/ftl_band.o 00:04:24.069 CC lib/ftl/ftl_band_ops.o 00:04:24.069 SO libspdk_ublk.so.2.0 00:04:24.069 SYMLINK libspdk_nbd.so 00:04:24.069 CC lib/nvmf/nvmf.o 00:04:24.069 CC lib/nvmf/nvmf_rpc.o 00:04:24.069 SYMLINK libspdk_ublk.so 00:04:24.069 CC lib/nvmf/transport.o 00:04:24.328 LIB libspdk_scsi.a 00:04:24.328 SO libspdk_scsi.so.8.0 00:04:24.329 CC lib/ftl/ftl_writer.o 00:04:24.329 SYMLINK libspdk_scsi.so 00:04:24.329 CC lib/ftl/ftl_rq.o 00:04:24.587 CC lib/ftl/ftl_reloc.o 00:04:24.587 CC lib/ftl/ftl_l2p_cache.o 00:04:24.587 CC lib/nvmf/tcp.o 00:04:24.587 CC lib/nvmf/rdma.o 00:04:24.846 CC lib/ftl/ftl_p2l.o 00:04:24.846 CC lib/ftl/mngt/ftl_mngt.o 00:04:24.846 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:25.105 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:25.105 CC lib/iscsi/conn.o 00:04:25.105 CC lib/vhost/vhost.o 00:04:25.105 CC lib/vhost/vhost_rpc.o 00:04:25.105 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:25.105 CC lib/iscsi/init_grp.o 00:04:25.105 CC lib/vhost/vhost_scsi.o 00:04:25.105 CC lib/vhost/vhost_blk.o 00:04:25.364 CC lib/iscsi/iscsi.o 00:04:25.364 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:25.364 CC lib/iscsi/md5.o 00:04:25.623 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:25.623 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:25.623 CC lib/iscsi/param.o 00:04:25.883 CC lib/vhost/rte_vhost_user.o 00:04:25.883 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:25.883 CC lib/iscsi/portal_grp.o 00:04:25.883 CC lib/iscsi/tgt_node.o 00:04:25.883 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:26.141 CC lib/iscsi/iscsi_subsystem.o 00:04:26.141 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:26.141 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:26.141 CC lib/iscsi/iscsi_rpc.o 00:04:26.141 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:26.400 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:26.400 CC lib/iscsi/task.o 00:04:26.400 CC lib/ftl/utils/ftl_conf.o 00:04:26.400 CC lib/ftl/utils/ftl_md.o 00:04:26.400 CC lib/ftl/utils/ftl_mempool.o 00:04:26.400 CC lib/ftl/utils/ftl_bitmap.o 00:04:26.400 CC lib/ftl/utils/ftl_property.o 00:04:26.400 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:26.659 CC 
lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:26.659 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:26.659 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:26.659 LIB libspdk_nvmf.a 00:04:26.659 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:26.659 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:26.659 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:26.659 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:26.659 SO libspdk_nvmf.so.17.0 00:04:26.659 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:26.918 LIB libspdk_iscsi.a 00:04:26.918 LIB libspdk_vhost.a 00:04:26.918 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:26.918 CC lib/ftl/base/ftl_base_dev.o 00:04:26.918 CC lib/ftl/base/ftl_base_bdev.o 00:04:26.918 SO libspdk_iscsi.so.7.0 00:04:26.918 SO libspdk_vhost.so.7.1 00:04:26.918 SYMLINK libspdk_nvmf.so 00:04:26.918 CC lib/ftl/ftl_trace.o 00:04:26.918 SYMLINK libspdk_vhost.so 00:04:27.176 SYMLINK libspdk_iscsi.so 00:04:27.176 LIB libspdk_ftl.a 00:04:27.434 SO libspdk_ftl.so.8.0 00:04:27.692 SYMLINK libspdk_ftl.so 00:04:27.950 CC module/env_dpdk/env_dpdk_rpc.o 00:04:27.950 CC module/accel/error/accel_error.o 00:04:27.950 CC module/accel/dsa/accel_dsa.o 00:04:27.950 CC module/accel/ioat/accel_ioat.o 00:04:27.950 CC module/blob/bdev/blob_bdev.o 00:04:27.950 CC module/accel/iaa/accel_iaa.o 00:04:27.950 CC module/scheduler/gscheduler/gscheduler.o 00:04:27.950 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:27.950 CC module/sock/posix/posix.o 00:04:27.950 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:28.208 LIB libspdk_env_dpdk_rpc.a 00:04:28.208 SO libspdk_env_dpdk_rpc.so.5.0 00:04:28.208 LIB libspdk_scheduler_gscheduler.a 00:04:28.208 SYMLINK libspdk_env_dpdk_rpc.so 00:04:28.208 CC module/accel/dsa/accel_dsa_rpc.o 00:04:28.208 LIB libspdk_scheduler_dpdk_governor.a 00:04:28.208 SO libspdk_scheduler_gscheduler.so.3.0 00:04:28.208 CC module/accel/error/accel_error_rpc.o 00:04:28.208 LIB libspdk_scheduler_dynamic.a 00:04:28.208 CC module/accel/ioat/accel_ioat_rpc.o 00:04:28.208 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:28.208 CC module/accel/iaa/accel_iaa_rpc.o 00:04:28.209 SO libspdk_scheduler_dynamic.so.3.0 00:04:28.209 SYMLINK libspdk_scheduler_gscheduler.so 00:04:28.209 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:28.209 LIB libspdk_blob_bdev.a 00:04:28.466 SYMLINK libspdk_scheduler_dynamic.so 00:04:28.466 SO libspdk_blob_bdev.so.10.1 00:04:28.466 LIB libspdk_accel_dsa.a 00:04:28.466 LIB libspdk_accel_error.a 00:04:28.466 SO libspdk_accel_dsa.so.4.0 00:04:28.466 LIB libspdk_accel_iaa.a 00:04:28.466 LIB libspdk_accel_ioat.a 00:04:28.466 SYMLINK libspdk_blob_bdev.so 00:04:28.466 SO libspdk_accel_error.so.1.0 00:04:28.466 SO libspdk_accel_ioat.so.5.0 00:04:28.466 SO libspdk_accel_iaa.so.2.0 00:04:28.466 SYMLINK libspdk_accel_dsa.so 00:04:28.466 SYMLINK libspdk_accel_error.so 00:04:28.466 SYMLINK libspdk_accel_ioat.so 00:04:28.466 SYMLINK libspdk_accel_iaa.so 00:04:28.724 CC module/blobfs/bdev/blobfs_bdev.o 00:04:28.724 CC module/bdev/gpt/gpt.o 00:04:28.724 CC module/bdev/delay/vbdev_delay.o 00:04:28.724 CC module/bdev/malloc/bdev_malloc.o 00:04:28.724 CC module/bdev/error/vbdev_error.o 00:04:28.724 CC module/bdev/lvol/vbdev_lvol.o 00:04:28.724 CC module/bdev/nvme/bdev_nvme.o 00:04:28.724 CC module/bdev/null/bdev_null.o 00:04:28.724 CC module/bdev/passthru/vbdev_passthru.o 00:04:28.724 LIB libspdk_sock_posix.a 00:04:28.724 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:28.724 SO libspdk_sock_posix.so.5.0 00:04:28.724 CC module/bdev/gpt/vbdev_gpt.o 00:04:28.983 CC module/bdev/error/vbdev_error_rpc.o 00:04:28.983 SYMLINK 
libspdk_sock_posix.so 00:04:28.983 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:28.983 CC module/bdev/null/bdev_null_rpc.o 00:04:28.983 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:28.983 LIB libspdk_blobfs_bdev.a 00:04:28.983 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:28.983 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:28.983 SO libspdk_blobfs_bdev.so.5.0 00:04:28.983 LIB libspdk_bdev_error.a 00:04:28.983 LIB libspdk_bdev_gpt.a 00:04:29.241 LIB libspdk_bdev_passthru.a 00:04:29.241 SYMLINK libspdk_blobfs_bdev.so 00:04:29.241 SO libspdk_bdev_error.so.5.0 00:04:29.241 LIB libspdk_bdev_null.a 00:04:29.241 SO libspdk_bdev_gpt.so.5.0 00:04:29.241 SO libspdk_bdev_passthru.so.5.0 00:04:29.241 SO libspdk_bdev_null.so.5.0 00:04:29.241 LIB libspdk_bdev_delay.a 00:04:29.241 LIB libspdk_bdev_malloc.a 00:04:29.241 SYMLINK libspdk_bdev_error.so 00:04:29.241 SO libspdk_bdev_delay.so.5.0 00:04:29.241 SYMLINK libspdk_bdev_passthru.so 00:04:29.241 SO libspdk_bdev_malloc.so.5.0 00:04:29.241 SYMLINK libspdk_bdev_gpt.so 00:04:29.241 CC module/bdev/raid/bdev_raid.o 00:04:29.241 SYMLINK libspdk_bdev_null.so 00:04:29.241 CC module/bdev/raid/bdev_raid_rpc.o 00:04:29.241 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:29.241 CC module/bdev/split/vbdev_split.o 00:04:29.241 SYMLINK libspdk_bdev_delay.so 00:04:29.241 CC module/bdev/split/vbdev_split_rpc.o 00:04:29.241 LIB libspdk_bdev_lvol.a 00:04:29.241 SYMLINK libspdk_bdev_malloc.so 00:04:29.241 SO libspdk_bdev_lvol.so.5.0 00:04:29.241 CC module/bdev/raid/bdev_raid_sb.o 00:04:29.241 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:29.499 CC module/bdev/aio/bdev_aio.o 00:04:29.499 SYMLINK libspdk_bdev_lvol.so 00:04:29.499 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:29.499 CC module/bdev/nvme/nvme_rpc.o 00:04:29.499 CC module/bdev/nvme/bdev_mdns_client.o 00:04:29.499 LIB libspdk_bdev_split.a 00:04:29.499 SO libspdk_bdev_split.so.5.0 00:04:29.499 CC module/bdev/nvme/vbdev_opal.o 00:04:29.499 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:29.499 SYMLINK libspdk_bdev_split.so 00:04:29.811 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:29.811 CC module/bdev/aio/bdev_aio_rpc.o 00:04:29.811 LIB libspdk_bdev_zone_block.a 00:04:29.811 SO libspdk_bdev_zone_block.so.5.0 00:04:29.811 CC module/bdev/ftl/bdev_ftl.o 00:04:29.811 CC module/bdev/raid/raid0.o 00:04:29.811 SYMLINK libspdk_bdev_zone_block.so 00:04:29.811 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:29.811 CC module/bdev/raid/raid1.o 00:04:29.811 LIB libspdk_bdev_aio.a 00:04:29.811 CC module/bdev/raid/concat.o 00:04:29.811 CC module/bdev/iscsi/bdev_iscsi.o 00:04:29.811 SO libspdk_bdev_aio.so.5.0 00:04:29.811 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:30.069 SYMLINK libspdk_bdev_aio.so 00:04:30.069 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:30.069 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:30.069 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:30.069 LIB libspdk_bdev_raid.a 00:04:30.069 LIB libspdk_bdev_ftl.a 00:04:30.069 SO libspdk_bdev_ftl.so.5.0 00:04:30.069 SO libspdk_bdev_raid.so.5.0 00:04:30.327 SYMLINK libspdk_bdev_ftl.so 00:04:30.327 SYMLINK libspdk_bdev_raid.so 00:04:30.327 LIB libspdk_bdev_iscsi.a 00:04:30.327 SO libspdk_bdev_iscsi.so.5.0 00:04:30.328 SYMLINK libspdk_bdev_iscsi.so 00:04:30.328 LIB libspdk_bdev_virtio.a 00:04:30.585 SO libspdk_bdev_virtio.so.5.0 00:04:30.585 SYMLINK libspdk_bdev_virtio.so 00:04:30.843 LIB libspdk_bdev_nvme.a 00:04:30.843 SO libspdk_bdev_nvme.so.6.0 00:04:31.101 SYMLINK libspdk_bdev_nvme.so 00:04:31.359 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:31.359 CC 
module/event/subsystems/vmd/vmd.o 00:04:31.359 CC module/event/subsystems/scheduler/scheduler.o 00:04:31.359 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:31.359 CC module/event/subsystems/sock/sock.o 00:04:31.359 CC module/event/subsystems/iobuf/iobuf.o 00:04:31.359 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:31.617 LIB libspdk_event_sock.a 00:04:31.617 LIB libspdk_event_vhost_blk.a 00:04:31.617 LIB libspdk_event_vmd.a 00:04:31.617 LIB libspdk_event_scheduler.a 00:04:31.617 SO libspdk_event_sock.so.4.0 00:04:31.617 SO libspdk_event_scheduler.so.3.0 00:04:31.617 SO libspdk_event_vhost_blk.so.2.0 00:04:31.617 SO libspdk_event_vmd.so.5.0 00:04:31.617 LIB libspdk_event_iobuf.a 00:04:31.617 SYMLINK libspdk_event_scheduler.so 00:04:31.617 SYMLINK libspdk_event_sock.so 00:04:31.617 SYMLINK libspdk_event_vhost_blk.so 00:04:31.617 SO libspdk_event_iobuf.so.2.0 00:04:31.617 SYMLINK libspdk_event_vmd.so 00:04:31.875 SYMLINK libspdk_event_iobuf.so 00:04:31.875 CC module/event/subsystems/accel/accel.o 00:04:32.134 LIB libspdk_event_accel.a 00:04:32.134 SO libspdk_event_accel.so.5.0 00:04:32.134 SYMLINK libspdk_event_accel.so 00:04:32.392 CC module/event/subsystems/bdev/bdev.o 00:04:32.651 LIB libspdk_event_bdev.a 00:04:32.651 SO libspdk_event_bdev.so.5.0 00:04:32.651 SYMLINK libspdk_event_bdev.so 00:04:32.909 CC module/event/subsystems/nbd/nbd.o 00:04:32.909 CC module/event/subsystems/ublk/ublk.o 00:04:32.909 CC module/event/subsystems/scsi/scsi.o 00:04:32.909 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:32.909 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:33.168 LIB libspdk_event_nbd.a 00:04:33.168 LIB libspdk_event_ublk.a 00:04:33.168 LIB libspdk_event_scsi.a 00:04:33.168 SO libspdk_event_ublk.so.2.0 00:04:33.168 SO libspdk_event_nbd.so.5.0 00:04:33.168 SO libspdk_event_scsi.so.5.0 00:04:33.168 SYMLINK libspdk_event_ublk.so 00:04:33.168 SYMLINK libspdk_event_nbd.so 00:04:33.168 LIB libspdk_event_nvmf.a 00:04:33.168 SYMLINK libspdk_event_scsi.so 00:04:33.168 SO libspdk_event_nvmf.so.5.0 00:04:33.427 SYMLINK libspdk_event_nvmf.so 00:04:33.427 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:33.427 CC module/event/subsystems/iscsi/iscsi.o 00:04:33.686 LIB libspdk_event_vhost_scsi.a 00:04:33.686 LIB libspdk_event_iscsi.a 00:04:33.686 SO libspdk_event_vhost_scsi.so.2.0 00:04:33.686 SO libspdk_event_iscsi.so.5.0 00:04:33.686 SYMLINK libspdk_event_vhost_scsi.so 00:04:33.686 SYMLINK libspdk_event_iscsi.so 00:04:33.945 SO libspdk.so.5.0 00:04:33.945 SYMLINK libspdk.so 00:04:33.945 TEST_HEADER include/spdk/accel.h 00:04:33.945 TEST_HEADER include/spdk/accel_module.h 00:04:33.945 TEST_HEADER include/spdk/assert.h 00:04:33.945 CXX app/trace/trace.o 00:04:33.945 TEST_HEADER include/spdk/barrier.h 00:04:33.945 TEST_HEADER include/spdk/base64.h 00:04:33.945 TEST_HEADER include/spdk/bdev.h 00:04:33.945 TEST_HEADER include/spdk/bdev_module.h 00:04:33.945 TEST_HEADER include/spdk/bdev_zone.h 00:04:33.945 TEST_HEADER include/spdk/bit_array.h 00:04:33.945 TEST_HEADER include/spdk/bit_pool.h 00:04:33.945 TEST_HEADER include/spdk/blob_bdev.h 00:04:33.945 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:33.945 TEST_HEADER include/spdk/blobfs.h 00:04:33.945 TEST_HEADER include/spdk/blob.h 00:04:33.945 TEST_HEADER include/spdk/conf.h 00:04:33.945 TEST_HEADER include/spdk/config.h 00:04:33.945 TEST_HEADER include/spdk/cpuset.h 00:04:33.945 TEST_HEADER include/spdk/crc16.h 00:04:33.945 TEST_HEADER include/spdk/crc32.h 00:04:33.945 TEST_HEADER include/spdk/crc64.h 00:04:33.945 TEST_HEADER include/spdk/dif.h 
00:04:33.945 TEST_HEADER include/spdk/dma.h 00:04:33.945 TEST_HEADER include/spdk/endian.h 00:04:33.945 TEST_HEADER include/spdk/env_dpdk.h 00:04:33.945 TEST_HEADER include/spdk/env.h 00:04:33.945 TEST_HEADER include/spdk/event.h 00:04:33.945 TEST_HEADER include/spdk/fd_group.h 00:04:33.945 TEST_HEADER include/spdk/fd.h 00:04:33.945 TEST_HEADER include/spdk/file.h 00:04:33.945 TEST_HEADER include/spdk/ftl.h 00:04:33.945 CC test/event/event_perf/event_perf.o 00:04:33.945 TEST_HEADER include/spdk/gpt_spec.h 00:04:34.203 TEST_HEADER include/spdk/hexlify.h 00:04:34.203 TEST_HEADER include/spdk/histogram_data.h 00:04:34.203 TEST_HEADER include/spdk/idxd.h 00:04:34.203 CC examples/accel/perf/accel_perf.o 00:04:34.203 TEST_HEADER include/spdk/idxd_spec.h 00:04:34.203 TEST_HEADER include/spdk/init.h 00:04:34.203 TEST_HEADER include/spdk/ioat.h 00:04:34.203 TEST_HEADER include/spdk/ioat_spec.h 00:04:34.203 TEST_HEADER include/spdk/iscsi_spec.h 00:04:34.203 TEST_HEADER include/spdk/json.h 00:04:34.203 TEST_HEADER include/spdk/jsonrpc.h 00:04:34.203 TEST_HEADER include/spdk/likely.h 00:04:34.203 CC test/bdev/bdevio/bdevio.o 00:04:34.203 TEST_HEADER include/spdk/log.h 00:04:34.203 TEST_HEADER include/spdk/lvol.h 00:04:34.203 CC test/app/bdev_svc/bdev_svc.o 00:04:34.203 CC test/blobfs/mkfs/mkfs.o 00:04:34.203 TEST_HEADER include/spdk/memory.h 00:04:34.203 TEST_HEADER include/spdk/mmio.h 00:04:34.203 CC test/accel/dif/dif.o 00:04:34.203 TEST_HEADER include/spdk/nbd.h 00:04:34.203 TEST_HEADER include/spdk/notify.h 00:04:34.203 TEST_HEADER include/spdk/nvme.h 00:04:34.203 TEST_HEADER include/spdk/nvme_intel.h 00:04:34.203 CC test/dma/test_dma/test_dma.o 00:04:34.203 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:34.203 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:34.203 TEST_HEADER include/spdk/nvme_spec.h 00:04:34.203 TEST_HEADER include/spdk/nvme_zns.h 00:04:34.203 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:34.204 CC test/env/mem_callbacks/mem_callbacks.o 00:04:34.204 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:34.204 TEST_HEADER include/spdk/nvmf.h 00:04:34.204 TEST_HEADER include/spdk/nvmf_spec.h 00:04:34.204 TEST_HEADER include/spdk/nvmf_transport.h 00:04:34.204 TEST_HEADER include/spdk/opal.h 00:04:34.204 TEST_HEADER include/spdk/opal_spec.h 00:04:34.204 TEST_HEADER include/spdk/pci_ids.h 00:04:34.204 TEST_HEADER include/spdk/pipe.h 00:04:34.204 TEST_HEADER include/spdk/queue.h 00:04:34.204 TEST_HEADER include/spdk/reduce.h 00:04:34.204 TEST_HEADER include/spdk/rpc.h 00:04:34.204 TEST_HEADER include/spdk/scheduler.h 00:04:34.204 TEST_HEADER include/spdk/scsi.h 00:04:34.204 TEST_HEADER include/spdk/scsi_spec.h 00:04:34.204 TEST_HEADER include/spdk/sock.h 00:04:34.204 TEST_HEADER include/spdk/stdinc.h 00:04:34.204 TEST_HEADER include/spdk/string.h 00:04:34.204 TEST_HEADER include/spdk/thread.h 00:04:34.204 TEST_HEADER include/spdk/trace.h 00:04:34.204 TEST_HEADER include/spdk/trace_parser.h 00:04:34.204 TEST_HEADER include/spdk/tree.h 00:04:34.204 TEST_HEADER include/spdk/ublk.h 00:04:34.204 TEST_HEADER include/spdk/util.h 00:04:34.204 TEST_HEADER include/spdk/uuid.h 00:04:34.204 TEST_HEADER include/spdk/version.h 00:04:34.204 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:34.204 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:34.204 TEST_HEADER include/spdk/vhost.h 00:04:34.204 TEST_HEADER include/spdk/vmd.h 00:04:34.204 TEST_HEADER include/spdk/xor.h 00:04:34.204 TEST_HEADER include/spdk/zipf.h 00:04:34.204 CXX test/cpp_headers/accel.o 00:04:34.204 LINK event_perf 00:04:34.463 LINK bdev_svc 
00:04:34.463 LINK mkfs 00:04:34.463 LINK spdk_trace 00:04:34.463 CXX test/cpp_headers/accel_module.o 00:04:34.463 CC test/event/reactor/reactor.o 00:04:34.463 LINK bdevio 00:04:34.463 LINK dif 00:04:34.463 LINK accel_perf 00:04:34.463 LINK test_dma 00:04:34.721 CXX test/cpp_headers/assert.o 00:04:34.721 LINK reactor 00:04:34.721 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:34.721 CC test/lvol/esnap/esnap.o 00:04:34.721 CC app/trace_record/trace_record.o 00:04:34.721 CXX test/cpp_headers/barrier.o 00:04:34.721 LINK mem_callbacks 00:04:34.721 CC test/event/reactor_perf/reactor_perf.o 00:04:34.980 CC test/event/app_repeat/app_repeat.o 00:04:34.980 CC test/event/scheduler/scheduler.o 00:04:34.980 CC examples/bdev/hello_world/hello_bdev.o 00:04:34.980 CC test/nvme/aer/aer.o 00:04:34.980 CXX test/cpp_headers/base64.o 00:04:34.980 LINK reactor_perf 00:04:34.980 CC test/env/vtophys/vtophys.o 00:04:34.980 LINK spdk_trace_record 00:04:34.980 LINK app_repeat 00:04:35.238 LINK scheduler 00:04:35.238 LINK nvme_fuzz 00:04:35.238 CXX test/cpp_headers/bdev.o 00:04:35.238 LINK vtophys 00:04:35.238 LINK hello_bdev 00:04:35.238 CC test/app/histogram_perf/histogram_perf.o 00:04:35.238 LINK aer 00:04:35.238 CC app/nvmf_tgt/nvmf_main.o 00:04:35.238 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:35.496 CXX test/cpp_headers/bdev_module.o 00:04:35.496 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:35.496 LINK histogram_perf 00:04:35.496 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:35.496 CC test/env/memory/memory_ut.o 00:04:35.496 CC test/nvme/reset/reset.o 00:04:35.496 LINK nvmf_tgt 00:04:35.496 LINK env_dpdk_post_init 00:04:35.496 CC examples/bdev/bdevperf/bdevperf.o 00:04:35.496 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:35.496 CXX test/cpp_headers/bdev_zone.o 00:04:35.496 CC test/nvme/sgl/sgl.o 00:04:35.754 CC test/rpc_client/rpc_client_test.o 00:04:35.754 LINK reset 00:04:35.754 CXX test/cpp_headers/bit_array.o 00:04:35.754 CC app/iscsi_tgt/iscsi_tgt.o 00:04:36.011 LINK sgl 00:04:36.011 LINK rpc_client_test 00:04:36.011 CXX test/cpp_headers/bit_pool.o 00:04:36.011 LINK vhost_fuzz 00:04:36.011 CXX test/cpp_headers/blob_bdev.o 00:04:36.011 LINK iscsi_tgt 00:04:36.011 CC test/thread/poller_perf/poller_perf.o 00:04:36.270 CC test/nvme/e2edp/nvme_dp.o 00:04:36.270 CC app/spdk_lspci/spdk_lspci.o 00:04:36.270 CC app/spdk_tgt/spdk_tgt.o 00:04:36.270 CXX test/cpp_headers/blobfs_bdev.o 00:04:36.270 LINK bdevperf 00:04:36.270 CXX test/cpp_headers/blobfs.o 00:04:36.270 LINK poller_perf 00:04:36.270 LINK spdk_lspci 00:04:36.270 LINK memory_ut 00:04:36.529 LINK nvme_dp 00:04:36.529 CXX test/cpp_headers/blob.o 00:04:36.529 LINK spdk_tgt 00:04:36.529 CC app/spdk_nvme_perf/perf.o 00:04:36.529 CC app/spdk_nvme_discover/discovery_aer.o 00:04:36.529 CC app/spdk_nvme_identify/identify.o 00:04:36.529 CXX test/cpp_headers/conf.o 00:04:36.529 CC test/env/pci/pci_ut.o 00:04:36.529 CC examples/blob/hello_world/hello_blob.o 00:04:36.787 CC test/nvme/overhead/overhead.o 00:04:36.787 LINK spdk_nvme_discover 00:04:36.787 CC examples/blob/cli/blobcli.o 00:04:36.787 CXX test/cpp_headers/config.o 00:04:36.787 CXX test/cpp_headers/cpuset.o 00:04:36.787 LINK hello_blob 00:04:37.046 CC app/spdk_top/spdk_top.o 00:04:37.046 LINK iscsi_fuzz 00:04:37.046 LINK overhead 00:04:37.046 CXX test/cpp_headers/crc16.o 00:04:37.046 LINK pci_ut 00:04:37.046 CXX test/cpp_headers/crc32.o 00:04:37.305 CC test/nvme/err_injection/err_injection.o 00:04:37.305 LINK blobcli 00:04:37.305 CC test/app/jsoncat/jsoncat.o 00:04:37.305 LINK 
spdk_nvme_perf 00:04:37.305 CXX test/cpp_headers/crc64.o 00:04:37.305 LINK spdk_nvme_identify 00:04:37.305 CC examples/ioat/perf/perf.o 00:04:37.305 LINK err_injection 00:04:37.305 LINK jsoncat 00:04:37.563 CC examples/nvme/hello_world/hello_world.o 00:04:37.563 CXX test/cpp_headers/dif.o 00:04:37.563 CC examples/sock/hello_world/hello_sock.o 00:04:37.563 LINK ioat_perf 00:04:37.563 CC test/nvme/startup/startup.o 00:04:37.563 CC examples/vmd/lsvmd/lsvmd.o 00:04:37.563 CC test/app/stub/stub.o 00:04:37.563 CC examples/nvmf/nvmf/nvmf.o 00:04:37.563 LINK hello_world 00:04:37.563 CXX test/cpp_headers/dma.o 00:04:37.832 LINK lsvmd 00:04:37.832 LINK startup 00:04:37.832 CC examples/ioat/verify/verify.o 00:04:37.832 LINK hello_sock 00:04:37.832 LINK stub 00:04:37.832 CXX test/cpp_headers/endian.o 00:04:37.832 LINK spdk_top 00:04:37.832 CC examples/nvme/reconnect/reconnect.o 00:04:37.832 CXX test/cpp_headers/env_dpdk.o 00:04:38.110 LINK nvmf 00:04:38.110 CC examples/vmd/led/led.o 00:04:38.110 LINK verify 00:04:38.110 CC test/nvme/reserve/reserve.o 00:04:38.110 CC test/nvme/simple_copy/simple_copy.o 00:04:38.110 CXX test/cpp_headers/env.o 00:04:38.110 CC app/vhost/vhost.o 00:04:38.110 CC app/spdk_dd/spdk_dd.o 00:04:38.110 LINK led 00:04:38.369 LINK reconnect 00:04:38.369 LINK reserve 00:04:38.369 LINK simple_copy 00:04:38.369 CC app/fio/nvme/fio_plugin.o 00:04:38.369 CXX test/cpp_headers/event.o 00:04:38.369 LINK vhost 00:04:38.369 CC examples/util/zipf/zipf.o 00:04:38.369 CXX test/cpp_headers/fd_group.o 00:04:38.369 CXX test/cpp_headers/fd.o 00:04:38.369 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:38.369 LINK zipf 00:04:38.628 CC test/nvme/connect_stress/connect_stress.o 00:04:38.628 LINK spdk_dd 00:04:38.628 CXX test/cpp_headers/file.o 00:04:38.628 CC examples/idxd/perf/perf.o 00:04:38.628 CC examples/thread/thread/thread_ex.o 00:04:38.628 CXX test/cpp_headers/ftl.o 00:04:38.628 LINK connect_stress 00:04:38.628 CXX test/cpp_headers/gpt_spec.o 00:04:38.887 LINK spdk_nvme 00:04:38.887 CC app/fio/bdev/fio_plugin.o 00:04:38.887 LINK thread 00:04:38.887 CXX test/cpp_headers/hexlify.o 00:04:38.887 CC examples/nvme/arbitration/arbitration.o 00:04:38.887 LINK nvme_manage 00:04:38.887 CC test/nvme/boot_partition/boot_partition.o 00:04:38.887 LINK idxd_perf 00:04:38.887 CC examples/nvme/hotplug/hotplug.o 00:04:39.146 CXX test/cpp_headers/histogram_data.o 00:04:39.146 LINK boot_partition 00:04:39.146 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:39.146 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:39.146 CC test/nvme/compliance/nvme_compliance.o 00:04:39.146 LINK hotplug 00:04:39.146 CXX test/cpp_headers/idxd.o 00:04:39.146 LINK arbitration 00:04:39.404 LINK esnap 00:04:39.404 CC test/nvme/fused_ordering/fused_ordering.o 00:04:39.404 LINK spdk_bdev 00:04:39.404 LINK interrupt_tgt 00:04:39.404 LINK cmb_copy 00:04:39.404 CC examples/nvme/abort/abort.o 00:04:39.404 CXX test/cpp_headers/idxd_spec.o 00:04:39.404 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:39.404 LINK nvme_compliance 00:04:39.662 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:39.662 LINK fused_ordering 00:04:39.662 CXX test/cpp_headers/init.o 00:04:39.662 CXX test/cpp_headers/ioat.o 00:04:39.662 CXX test/cpp_headers/ioat_spec.o 00:04:39.662 LINK pmr_persistence 00:04:39.921 CXX test/cpp_headers/iscsi_spec.o 00:04:39.921 CC test/nvme/fdp/fdp.o 00:04:39.921 CXX test/cpp_headers/json.o 00:04:39.921 LINK doorbell_aers 00:04:39.921 CC test/nvme/cuse/cuse.o 00:04:39.921 CXX test/cpp_headers/jsonrpc.o 00:04:39.921 LINK abort 
00:04:39.921 CXX test/cpp_headers/likely.o 00:04:39.921 CXX test/cpp_headers/log.o 00:04:39.921 CXX test/cpp_headers/lvol.o 00:04:40.180 CXX test/cpp_headers/memory.o 00:04:40.180 CXX test/cpp_headers/mmio.o 00:04:40.180 CXX test/cpp_headers/nbd.o 00:04:40.180 CXX test/cpp_headers/notify.o 00:04:40.180 CXX test/cpp_headers/nvme.o 00:04:40.180 CXX test/cpp_headers/nvme_intel.o 00:04:40.180 CXX test/cpp_headers/nvme_ocssd.o 00:04:40.180 LINK fdp 00:04:40.180 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:40.180 CXX test/cpp_headers/nvme_spec.o 00:04:40.180 CXX test/cpp_headers/nvme_zns.o 00:04:40.461 CXX test/cpp_headers/nvmf_cmd.o 00:04:40.461 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:40.461 CXX test/cpp_headers/nvmf.o 00:04:40.461 CXX test/cpp_headers/nvmf_spec.o 00:04:40.461 CXX test/cpp_headers/nvmf_transport.o 00:04:40.461 CXX test/cpp_headers/opal.o 00:04:40.461 CXX test/cpp_headers/opal_spec.o 00:04:40.461 CXX test/cpp_headers/pci_ids.o 00:04:40.461 CXX test/cpp_headers/pipe.o 00:04:40.461 CXX test/cpp_headers/queue.o 00:04:40.461 CXX test/cpp_headers/reduce.o 00:04:40.721 CXX test/cpp_headers/rpc.o 00:04:40.721 CXX test/cpp_headers/scheduler.o 00:04:40.721 CXX test/cpp_headers/scsi.o 00:04:40.721 CXX test/cpp_headers/scsi_spec.o 00:04:40.721 CXX test/cpp_headers/sock.o 00:04:40.721 CXX test/cpp_headers/stdinc.o 00:04:40.721 CXX test/cpp_headers/string.o 00:04:40.721 CXX test/cpp_headers/thread.o 00:04:40.721 CXX test/cpp_headers/trace.o 00:04:40.979 CXX test/cpp_headers/trace_parser.o 00:04:40.979 CXX test/cpp_headers/tree.o 00:04:40.979 CXX test/cpp_headers/ublk.o 00:04:40.979 CXX test/cpp_headers/util.o 00:04:40.979 CXX test/cpp_headers/uuid.o 00:04:40.979 CXX test/cpp_headers/version.o 00:04:40.979 CXX test/cpp_headers/vfio_user_pci.o 00:04:40.979 CXX test/cpp_headers/vfio_user_spec.o 00:04:40.979 CXX test/cpp_headers/vhost.o 00:04:40.979 CXX test/cpp_headers/vmd.o 00:04:40.979 CXX test/cpp_headers/xor.o 00:04:40.979 LINK cuse 00:04:40.979 CXX test/cpp_headers/zipf.o 00:04:46.247 00:04:46.247 real 0m56.775s 00:04:46.247 user 5m15.805s 00:04:46.247 sys 1m7.260s 00:04:46.247 00:13:32 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:46.247 ************************************ 00:04:46.247 END TEST make 00:04:46.247 ************************************ 00:04:46.247 00:13:32 -- common/autotest_common.sh@10 -- $ set +x 00:04:46.247 00:13:32 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:46.247 00:13:32 -- nvmf/common.sh@7 -- # uname -s 00:04:46.247 00:13:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:46.247 00:13:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:46.247 00:13:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:46.247 00:13:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:46.247 00:13:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:46.247 00:13:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:46.247 00:13:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:46.247 00:13:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:46.247 00:13:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:46.247 00:13:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:46.247 00:13:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:04:46.247 00:13:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:04:46.247 00:13:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:04:46.247 00:13:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:46.247 00:13:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:46.247 00:13:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:46.247 00:13:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:46.247 00:13:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.247 00:13:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.247 00:13:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.248 00:13:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.248 00:13:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.248 00:13:32 -- paths/export.sh@5 -- # export PATH 00:04:46.248 00:13:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.248 00:13:32 -- nvmf/common.sh@46 -- # : 0 00:04:46.248 00:13:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:46.248 00:13:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:46.248 00:13:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:46.248 00:13:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:46.248 00:13:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:46.248 00:13:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:46.248 00:13:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:46.248 00:13:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:46.248 00:13:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:46.248 00:13:32 -- spdk/autotest.sh@32 -- # uname -s 00:04:46.248 00:13:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:46.248 00:13:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:46.248 00:13:32 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:46.248 00:13:32 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:46.248 00:13:32 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:46.248 00:13:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:46.248 00:13:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:46.248 00:13:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:46.248 00:13:32 -- spdk/autotest.sh@48 -- # udevadm_pid=61724 00:04:46.248 00:13:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:46.248 00:13:32 -- spdk/autotest.sh@51 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/power 00:04:46.248 00:13:32 -- spdk/autotest.sh@54 -- # echo 61731 00:04:46.248 00:13:32 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:46.248 00:13:32 -- spdk/autotest.sh@56 -- # echo 61732 00:04:46.248 00:13:32 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:46.248 00:13:32 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:46.248 00:13:32 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:46.248 00:13:32 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:46.248 00:13:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:46.248 00:13:32 -- common/autotest_common.sh@10 -- # set +x 00:04:46.248 00:13:32 -- spdk/autotest.sh@70 -- # create_test_list 00:04:46.248 00:13:32 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:46.248 00:13:32 -- common/autotest_common.sh@10 -- # set +x 00:04:46.248 00:13:32 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:46.248 00:13:32 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:46.248 00:13:32 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:46.248 00:13:32 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:46.248 00:13:32 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:46.248 00:13:32 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:46.248 00:13:32 -- common/autotest_common.sh@1440 -- # uname 00:04:46.248 00:13:32 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:46.248 00:13:32 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:46.248 00:13:32 -- common/autotest_common.sh@1460 -- # uname 00:04:46.248 00:13:32 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:46.248 00:13:32 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:46.248 00:13:32 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:46.248 00:13:32 -- spdk/autotest.sh@83 -- # hash lcov 00:04:46.248 00:13:32 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:46.248 00:13:32 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:46.248 --rc lcov_branch_coverage=1 00:04:46.248 --rc lcov_function_coverage=1 00:04:46.248 --rc genhtml_branch_coverage=1 00:04:46.248 --rc genhtml_function_coverage=1 00:04:46.248 --rc genhtml_legend=1 00:04:46.248 --rc geninfo_all_blocks=1 00:04:46.248 ' 00:04:46.248 00:13:32 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:46.248 --rc lcov_branch_coverage=1 00:04:46.248 --rc lcov_function_coverage=1 00:04:46.248 --rc genhtml_branch_coverage=1 00:04:46.248 --rc genhtml_function_coverage=1 00:04:46.248 --rc genhtml_legend=1 00:04:46.248 --rc geninfo_all_blocks=1 00:04:46.248 ' 00:04:46.248 00:13:32 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:46.248 --rc lcov_branch_coverage=1 00:04:46.248 --rc lcov_function_coverage=1 00:04:46.248 --rc genhtml_branch_coverage=1 00:04:46.248 --rc genhtml_function_coverage=1 00:04:46.248 --rc genhtml_legend=1 00:04:46.248 --rc geninfo_all_blocks=1 00:04:46.248 --no-external' 00:04:46.248 00:13:32 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:46.248 --rc lcov_branch_coverage=1 00:04:46.248 --rc lcov_function_coverage=1 00:04:46.248 --rc genhtml_branch_coverage=1 00:04:46.248 --rc genhtml_function_coverage=1 00:04:46.248 --rc genhtml_legend=1 00:04:46.248 
--rc geninfo_all_blocks=1 00:04:46.248 --no-external' 00:04:46.248 00:13:32 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:46.248 lcov: LCOV version 1.14 00:04:46.248 00:13:32 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:54.359 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:54.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:54.359 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:54.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:54.359 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:54.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:12.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:12.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:12.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:12.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:12.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:12.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:12.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:12.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:12.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:12.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:12.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:12.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:12.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:12.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:12.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:12.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:12.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:12.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:12.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:12.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:12.459 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:12.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:12.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:12.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:12.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:12.459 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:12.459 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions 
found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:12.460 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions 
found 00:05:12.460 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:12.461 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:12.461 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:12.461 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:12.461 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:12.461 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:12.461 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:12.461 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:12.461 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:12.461 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:12.461 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:12.461 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:12.461 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:12.461 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:12.461 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:12.461 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:12.461 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:12.461 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:12.461 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:15.744 00:14:02 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:05:15.745 00:14:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:15.745 00:14:02 -- common/autotest_common.sh@10 -- # set +x 00:05:15.745 00:14:02 -- spdk/autotest.sh@102 -- # rm -f 00:05:15.745 00:14:02 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:16.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:16.398 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:16.398 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:16.398 00:14:03 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:05:16.398 00:14:03 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:16.398 00:14:03 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:16.398 00:14:03 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:16.398 00:14:03 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:16.398 00:14:03 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:16.398 00:14:03 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:16.398 00:14:03 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:16.398 00:14:03 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:16.398 00:14:03 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:16.398 00:14:03 -- 
common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:16.398 00:14:03 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:16.398 00:14:03 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:16.398 00:14:03 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:16.398 00:14:03 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:16.398 00:14:03 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:16.398 00:14:03 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:16.398 00:14:03 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:16.398 00:14:03 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:16.398 00:14:03 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:16.398 00:14:03 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:16.398 00:14:03 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:16.398 00:14:03 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:16.398 00:14:03 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:16.398 00:14:03 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:05:16.398 00:14:03 -- spdk/autotest.sh@121 -- # grep -v p 00:05:16.398 00:14:03 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:16.398 00:14:03 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:16.398 00:14:03 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:16.398 00:14:03 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:05:16.398 00:14:03 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:16.398 00:14:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:16.398 No valid GPT data, bailing 00:05:16.398 00:14:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:16.398 00:14:03 -- scripts/common.sh@393 -- # pt= 00:05:16.398 00:14:03 -- scripts/common.sh@394 -- # return 1 00:05:16.398 00:14:03 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:16.398 1+0 records in 00:05:16.398 1+0 records out 00:05:16.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444006 s, 236 MB/s 00:05:16.398 00:14:03 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:16.398 00:14:03 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:16.398 00:14:03 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:05:16.398 00:14:03 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:16.398 00:14:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:16.398 No valid GPT data, bailing 00:05:16.398 00:14:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:16.398 00:14:03 -- scripts/common.sh@393 -- # pt= 00:05:16.398 00:14:03 -- scripts/common.sh@394 -- # return 1 00:05:16.398 00:14:03 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:16.398 1+0 records in 00:05:16.398 1+0 records out 00:05:16.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00466096 s, 225 MB/s 00:05:16.398 00:14:03 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:16.398 00:14:03 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:16.398 00:14:03 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 00:05:16.398 00:14:03 -- scripts/common.sh@380 -- # local 
block=/dev/nvme1n2 pt 00:05:16.398 00:14:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:16.663 No valid GPT data, bailing 00:05:16.663 00:14:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:16.663 00:14:03 -- scripts/common.sh@393 -- # pt= 00:05:16.663 00:14:03 -- scripts/common.sh@394 -- # return 1 00:05:16.663 00:14:03 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:16.663 1+0 records in 00:05:16.663 1+0 records out 00:05:16.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499996 s, 210 MB/s 00:05:16.663 00:14:03 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:16.663 00:14:03 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:16.663 00:14:03 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:05:16.663 00:14:03 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:16.663 00:14:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:16.663 No valid GPT data, bailing 00:05:16.663 00:14:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:16.663 00:14:03 -- scripts/common.sh@393 -- # pt= 00:05:16.663 00:14:03 -- scripts/common.sh@394 -- # return 1 00:05:16.663 00:14:03 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:16.663 1+0 records in 00:05:16.663 1+0 records out 00:05:16.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00461643 s, 227 MB/s 00:05:16.663 00:14:03 -- spdk/autotest.sh@129 -- # sync 00:05:16.663 00:14:03 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:16.663 00:14:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:16.663 00:14:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:18.563 00:14:05 -- spdk/autotest.sh@135 -- # uname -s 00:05:18.563 00:14:05 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:05:18.563 00:14:05 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:18.563 00:14:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.563 00:14:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.563 00:14:05 -- common/autotest_common.sh@10 -- # set +x 00:05:18.563 ************************************ 00:05:18.563 START TEST setup.sh 00:05:18.563 ************************************ 00:05:18.563 00:14:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:18.563 * Looking for test storage... 00:05:18.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:18.563 00:14:05 -- setup/test-setup.sh@10 -- # uname -s 00:05:18.563 00:14:05 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:18.563 00:14:05 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:18.563 00:14:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.563 00:14:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.563 00:14:05 -- common/autotest_common.sh@10 -- # set +x 00:05:18.563 ************************************ 00:05:18.563 START TEST acl 00:05:18.563 ************************************ 00:05:18.563 00:14:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:18.822 * Looking for test storage... 
00:05:18.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:18.822 00:14:05 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:18.822 00:14:05 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:18.822 00:14:05 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:18.822 00:14:05 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:18.822 00:14:05 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:18.822 00:14:05 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:18.822 00:14:05 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:18.822 00:14:05 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:18.822 00:14:05 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:18.822 00:14:05 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:18.822 00:14:05 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:18.822 00:14:05 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:18.822 00:14:05 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:18.822 00:14:05 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:18.822 00:14:05 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:18.822 00:14:05 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:18.822 00:14:05 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:18.822 00:14:05 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:18.822 00:14:05 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:18.822 00:14:05 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:18.822 00:14:05 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:18.822 00:14:05 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:18.822 00:14:05 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:18.822 00:14:05 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:18.822 00:14:05 -- setup/acl.sh@12 -- # devs=() 00:05:18.822 00:14:05 -- setup/acl.sh@12 -- # declare -a devs 00:05:18.822 00:14:05 -- setup/acl.sh@13 -- # drivers=() 00:05:18.822 00:14:05 -- setup/acl.sh@13 -- # declare -A drivers 00:05:18.822 00:14:05 -- setup/acl.sh@51 -- # setup reset 00:05:18.822 00:14:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:18.822 00:14:05 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:19.390 00:14:06 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:19.390 00:14:06 -- setup/acl.sh@16 -- # local dev driver 00:05:19.390 00:14:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.390 00:14:06 -- setup/acl.sh@15 -- # setup output status 00:05:19.390 00:14:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.390 00:14:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:19.650 Hugepages 00:05:19.650 node hugesize free / total 00:05:19.650 00:14:06 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:19.650 00:14:06 -- setup/acl.sh@19 -- # continue 00:05:19.650 00:14:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.650 00:05:19.650 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:19.650 00:14:06 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:19.650 00:14:06 -- setup/acl.sh@19 -- # continue 00:05:19.650 00:14:06 -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:05:19.650 00:14:06 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:19.650 00:14:06 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:19.650 00:14:06 -- setup/acl.sh@20 -- # continue 00:05:19.650 00:14:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.650 00:14:06 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:19.650 00:14:06 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:19.650 00:14:06 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:19.650 00:14:06 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:19.650 00:14:06 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:19.650 00:14:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.919 00:14:06 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:19.919 00:14:06 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:19.919 00:14:06 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:19.919 00:14:06 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:19.919 00:14:06 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:19.919 00:14:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.919 00:14:06 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:19.919 00:14:06 -- setup/acl.sh@54 -- # run_test denied denied 00:05:19.919 00:14:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.919 00:14:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.919 00:14:06 -- common/autotest_common.sh@10 -- # set +x 00:05:19.919 ************************************ 00:05:19.919 START TEST denied 00:05:19.919 ************************************ 00:05:19.919 00:14:06 -- common/autotest_common.sh@1104 -- # denied 00:05:19.919 00:14:06 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:19.919 00:14:06 -- setup/acl.sh@38 -- # setup output config 00:05:19.919 00:14:06 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:19.919 00:14:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.919 00:14:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:20.856 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:20.856 00:14:07 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:20.856 00:14:07 -- setup/acl.sh@28 -- # local dev driver 00:05:20.856 00:14:07 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:20.856 00:14:07 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:20.856 00:14:07 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:20.856 00:14:07 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:20.856 00:14:07 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:20.856 00:14:07 -- setup/acl.sh@41 -- # setup reset 00:05:20.856 00:14:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:20.856 00:14:07 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:21.423 00:05:21.423 real 0m1.450s 00:05:21.423 user 0m0.576s 00:05:21.423 sys 0m0.828s 00:05:21.423 00:14:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.423 ************************************ 00:05:21.423 END TEST denied 00:05:21.423 ************************************ 00:05:21.423 00:14:08 -- common/autotest_common.sh@10 -- # set +x 00:05:21.423 00:14:08 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:21.423 00:14:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:21.423 00:14:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.423 
00:14:08 -- common/autotest_common.sh@10 -- # set +x 00:05:21.423 ************************************ 00:05:21.423 START TEST allowed 00:05:21.423 ************************************ 00:05:21.423 00:14:08 -- common/autotest_common.sh@1104 -- # allowed 00:05:21.423 00:14:08 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:21.423 00:14:08 -- setup/acl.sh@45 -- # setup output config 00:05:21.423 00:14:08 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:21.423 00:14:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.423 00:14:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.990 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.990 00:14:09 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:21.991 00:14:09 -- setup/acl.sh@28 -- # local dev driver 00:05:21.991 00:14:09 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:21.991 00:14:09 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:21.991 00:14:09 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:21.991 00:14:09 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:21.991 00:14:09 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:21.991 00:14:09 -- setup/acl.sh@48 -- # setup reset 00:05:21.991 00:14:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:21.991 00:14:09 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:22.926 00:05:22.926 real 0m1.508s 00:05:22.926 user 0m0.672s 00:05:22.926 sys 0m0.828s 00:05:22.926 00:14:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.926 ************************************ 00:05:22.926 00:14:09 -- common/autotest_common.sh@10 -- # set +x 00:05:22.926 END TEST allowed 00:05:22.926 ************************************ 00:05:22.926 00:05:22.926 real 0m4.242s 00:05:22.926 user 0m1.797s 00:05:22.926 sys 0m2.410s 00:05:22.926 00:14:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.926 00:14:09 -- common/autotest_common.sh@10 -- # set +x 00:05:22.926 ************************************ 00:05:22.926 END TEST acl 00:05:22.926 ************************************ 00:05:22.926 00:14:10 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:22.926 00:14:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.926 00:14:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.926 00:14:10 -- common/autotest_common.sh@10 -- # set +x 00:05:22.926 ************************************ 00:05:22.926 START TEST hugepages 00:05:22.926 ************************************ 00:05:22.926 00:14:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:22.926 * Looking for test storage... 
00:05:22.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:22.926 00:14:10 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:22.926 00:14:10 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:22.926 00:14:10 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:22.926 00:14:10 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:22.926 00:14:10 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:22.926 00:14:10 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:22.926 00:14:10 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:22.926 00:14:10 -- setup/common.sh@18 -- # local node= 00:05:22.926 00:14:10 -- setup/common.sh@19 -- # local var val 00:05:22.926 00:14:10 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.926 00:14:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.926 00:14:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.926 00:14:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.926 00:14:10 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.926 00:14:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 4502588 kB' 'MemAvailable: 7403856 kB' 'Buffers: 2436 kB' 'Cached: 3102728 kB' 'SwapCached: 0 kB' 'Active: 475544 kB' 'Inactive: 2732448 kB' 'Active(anon): 113320 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 104480 kB' 'Mapped: 48756 kB' 'Shmem: 10492 kB' 'KReclaimable: 87520 kB' 'Slab: 167212 kB' 'SReclaimable: 87520 kB' 'SUnreclaim: 79692 kB' 'KernelStack: 6652 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 334452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- 
setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.926 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.926 00:14:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # continue 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.927 00:14:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.927 00:14:10 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.927 00:14:10 -- setup/common.sh@33 -- # echo 2048 00:05:22.927 00:14:10 -- setup/common.sh@33 -- # return 0 00:05:22.927 00:14:10 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:22.927 00:14:10 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:22.927 00:14:10 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:22.927 00:14:10 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:22.927 00:14:10 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:22.927 00:14:10 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
00:05:22.927 00:14:10 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:22.927 00:14:10 -- setup/hugepages.sh@207 -- # get_nodes 00:05:22.927 00:14:10 -- setup/hugepages.sh@27 -- # local node 00:05:22.927 00:14:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:22.927 00:14:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:22.927 00:14:10 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:22.927 00:14:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:22.927 00:14:10 -- setup/hugepages.sh@208 -- # clear_hp 00:05:22.927 00:14:10 -- setup/hugepages.sh@37 -- # local node hp 00:05:22.927 00:14:10 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:22.927 00:14:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:22.927 00:14:10 -- setup/hugepages.sh@41 -- # echo 0 00:05:22.927 00:14:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:22.927 00:14:10 -- setup/hugepages.sh@41 -- # echo 0 00:05:22.927 00:14:10 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:22.927 00:14:10 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:22.927 00:14:10 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:22.927 00:14:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.927 00:14:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.927 00:14:10 -- common/autotest_common.sh@10 -- # set +x 00:05:23.186 ************************************ 00:05:23.186 START TEST default_setup 00:05:23.186 ************************************ 00:05:23.186 00:14:10 -- common/autotest_common.sh@1104 -- # default_setup 00:05:23.186 00:14:10 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:23.186 00:14:10 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:23.186 00:14:10 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:23.186 00:14:10 -- setup/hugepages.sh@51 -- # shift 00:05:23.186 00:14:10 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:23.186 00:14:10 -- setup/hugepages.sh@52 -- # local node_ids 00:05:23.186 00:14:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:23.186 00:14:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:23.186 00:14:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:23.186 00:14:10 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:23.186 00:14:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:23.186 00:14:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:23.186 00:14:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:23.186 00:14:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:23.186 00:14:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:23.186 00:14:10 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:23.186 00:14:10 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:23.186 00:14:10 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:23.186 00:14:10 -- setup/hugepages.sh@73 -- # return 0 00:05:23.186 00:14:10 -- setup/hugepages.sh@137 -- # setup output 00:05:23.186 00:14:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.186 00:14:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.754 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.754 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.754 
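Before scripts/setup.sh rebinds the NVMe devices, the trace shows get_nodes finding a single NUMA node, clear_hp writing 0 into every per-node hugepages-*/nr_hugepages file, and get_test_nr_hugepages turning the requested 2097152 kB into 1024 pages of 2048 kB for node 0. A hedged sketch of that sysfs walk and arithmetic, with hypothetical helper names:

# Sketch only (needs root): zero out the per-node huge page pools, then work
# out how many default-size pages a 2097152 kB request corresponds to.
clear_node_hugepages() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
}

size_kb=2097152                          # requested pool size in kB
page_kb=2048                             # default huge page size from /proc/meminfo
nr_hugepages=$(( size_kb / page_kb ))    # 1024 pages assigned to node 0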
0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.015 00:14:10 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:24.015 00:14:11 -- setup/hugepages.sh@89 -- # local node 00:05:24.015 00:14:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:24.015 00:14:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:24.015 00:14:11 -- setup/hugepages.sh@92 -- # local surp 00:05:24.015 00:14:11 -- setup/hugepages.sh@93 -- # local resv 00:05:24.015 00:14:11 -- setup/hugepages.sh@94 -- # local anon 00:05:24.015 00:14:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:24.015 00:14:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:24.015 00:14:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:24.015 00:14:11 -- setup/common.sh@18 -- # local node= 00:05:24.015 00:14:11 -- setup/common.sh@19 -- # local var val 00:05:24.015 00:14:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.015 00:14:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.015 00:14:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.015 00:14:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.016 00:14:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.016 00:14:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6592056 kB' 'MemAvailable: 9493128 kB' 'Buffers: 2436 kB' 'Cached: 3102716 kB' 'SwapCached: 0 kB' 'Active: 491488 kB' 'Inactive: 2732452 kB' 'Active(anon): 129264 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732452 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120396 kB' 'Mapped: 48828 kB' 'Shmem: 10468 kB' 'KReclaimable: 87120 kB' 'Slab: 166856 kB' 'SReclaimable: 87120 kB' 'SUnreclaim: 79736 kB' 'KernelStack: 6576 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 
00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 
-- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.016 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.016 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.017 00:14:11 -- setup/common.sh@33 -- # echo 0 00:05:24.017 00:14:11 -- setup/common.sh@33 -- # return 0 00:05:24.017 00:14:11 -- setup/hugepages.sh@97 -- # anon=0 00:05:24.017 00:14:11 -- 
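The loop that just finished is the get_meminfo AnonHugePages pass: with no node argument it reads /proc/meminfo directly, skips every field that is not the requested one, and echoes the matching value (0 kB of anonymous huge pages here), which verify_nr_hugepages stores as anon=0. A compact sketch of that accessor pattern, not the SPDK function itself:

# Sketch of the get_meminfo pattern seen in the trace: print one field from
# /proc/meminfo, or from a node's meminfo when a node number is supplied
# (node meminfo lines carry a "Node N " prefix that is stripped first).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    awk -v f="$get" '{ sub(/^Node [0-9]+ +/, "") } $1 == (f ":") { print $2; exit }' "$mem_f"
}

anon=$(get_meminfo AnonHugePages)    # 0 on this runner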
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:24.017 00:14:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.017 00:14:11 -- setup/common.sh@18 -- # local node= 00:05:24.017 00:14:11 -- setup/common.sh@19 -- # local var val 00:05:24.017 00:14:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.017 00:14:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.017 00:14:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.017 00:14:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.017 00:14:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.017 00:14:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6591808 kB' 'MemAvailable: 9492872 kB' 'Buffers: 2436 kB' 'Cached: 3102716 kB' 'SwapCached: 0 kB' 'Active: 491528 kB' 'Inactive: 2732460 kB' 'Active(anon): 129304 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120452 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166744 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79660 kB' 'KernelStack: 6560 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.017 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.017 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 
00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- 
setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 
00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.018 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.018 00:14:11 -- setup/common.sh@33 -- # echo 0 00:05:24.018 00:14:11 -- setup/common.sh@33 -- # return 0 00:05:24.018 00:14:11 -- setup/hugepages.sh@99 -- # surp=0 00:05:24.018 00:14:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:24.018 00:14:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:24.018 00:14:11 -- setup/common.sh@18 -- # local node= 00:05:24.018 00:14:11 -- setup/common.sh@19 -- # local var val 00:05:24.018 00:14:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.018 00:14:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.018 00:14:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.018 00:14:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.018 00:14:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.018 00:14:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.018 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.018 00:14:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6591808 kB' 'MemAvailable: 9492872 kB' 'Buffers: 2436 kB' 'Cached: 3102716 kB' 'SwapCached: 0 kB' 'Active: 491468 kB' 'Inactive: 2732460 kB' 
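HugePages_Surp also came back 0, so surp=0, and the next pass asks for HugePages_Rsvd in the same way. The same counters are exposed per page size under sysfs as well; a cross-check sketch under the assumption that only the 2 MB pool is present on this runner (this is not something the traced script does at this point):

# Sketch only: read surplus and reserved huge page counts for the 2048 kB
# pool straight from sysfs instead of scanning /proc/meminfo.
hp=/sys/kernel/mm/hugepages/hugepages-2048kB
surp=$(cat "$hp/surplus_hugepages")   # 0 on this runner
resv=$(cat "$hp/resv_hugepages")      # 0 on this runner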
'Active(anon): 129244 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120376 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166740 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79656 kB' 'KernelStack: 6560 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 
00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.019 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.019 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.020 
00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.020 00:14:11 -- setup/common.sh@33 -- # echo 0 00:05:24.020 00:14:11 -- setup/common.sh@33 -- # return 0 00:05:24.020 nr_hugepages=1024 00:05:24.020 resv_hugepages=0 00:05:24.020 surplus_hugepages=0 00:05:24.020 anon_hugepages=0 00:05:24.020 00:14:11 -- setup/hugepages.sh@100 -- # resv=0 00:05:24.020 00:14:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:24.020 00:14:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:24.020 00:14:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:24.020 00:14:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:24.020 00:14:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:24.020 00:14:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:24.020 00:14:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:24.020 00:14:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:24.020 00:14:11 -- setup/common.sh@18 -- # local node= 00:05:24.020 00:14:11 -- setup/common.sh@19 -- # local var val 00:05:24.020 00:14:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.020 00:14:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.020 00:14:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.020 00:14:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.020 00:14:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.020 00:14:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6591808 kB' 'MemAvailable: 9492872 kB' 'Buffers: 2436 kB' 'Cached: 3102716 kB' 'SwapCached: 0 kB' 'Active: 491536 kB' 'Inactive: 2732460 kB' 'Active(anon): 129312 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120444 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166740 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79656 kB' 'KernelStack: 6528 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 
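With nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 echoed above, verify_nr_hugepages checks that the expected pool equals the allocated pages plus surplus plus reserved, and then re-reads HugePages_Total to confirm the kernel agrees. A minimal sketch of that arithmetic with the values from this run:

# Sketch of the consistency check, using the counters echoed in the trace.
expected=1024
nr_hugepages=1024
surp=0
resv=0

(( expected == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2

total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
(( expected == total )) || echo "HugePages_Total ($total) != expected ($expected)" >&2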
'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.020 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.020 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 
00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 
00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.021 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.021 00:14:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.022 
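The long scan above is setup/common.sh's get_meminfo helper walking every /proc/meminfo key until it reaches the one the caller asked for, in this case HugePages_Total; the value (1024) is echoed back on the next line. A minimal sketch of that pattern, reconstructed from the xtrace output rather than copied from the shipped script, including the per-node variant that also appears later in this log:

  # Reconstruction of the lookup pattern traced above (illustrative only;
  # layout inferred from the setup/common.sh@17-@33 xtrace lines).
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo
      # A node id switches the source to that node's meminfo, if it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node <N> "; strip that column.
      mem=("${mem[@]#Node +([0-9]) }")
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the repeated 'continue' lines above
          echo "$val"
          return 0
      done
      return 1
  }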
00:14:11 -- setup/common.sh@33 -- # echo 1024 00:05:24.022 00:14:11 -- setup/common.sh@33 -- # return 0 00:05:24.022 00:14:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:24.022 00:14:11 -- setup/hugepages.sh@112 -- # get_nodes 00:05:24.022 00:14:11 -- setup/hugepages.sh@27 -- # local node 00:05:24.022 00:14:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:24.022 00:14:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:24.022 00:14:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:24.022 00:14:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:24.022 00:14:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:24.022 00:14:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:24.022 00:14:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:24.022 00:14:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.022 00:14:11 -- setup/common.sh@18 -- # local node=0 00:05:24.022 00:14:11 -- setup/common.sh@19 -- # local var val 00:05:24.022 00:14:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.022 00:14:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.022 00:14:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:24.022 00:14:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:24.022 00:14:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.022 00:14:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6591808 kB' 'MemUsed: 5650164 kB' 'SwapCached: 0 kB' 'Active: 491304 kB' 'Inactive: 2732460 kB' 'Active(anon): 129080 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 3105152 kB' 'Mapped: 48756 kB' 'AnonPages: 120212 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87084 kB' 'Slab: 166736 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 
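With HugePages_Total read back as 1024, the harness checks that the global figure equals nr_hugepages plus surplus plus reserved pages, then repeats the exercise per NUMA node against the node0 meminfo dump printed above (the "node0=1024 expecting 1024" line a little further on). A hedged sketch of that accounting, using the names visible in the setup/hugepages.sh trace; how the per-node value is actually sourced is not visible in this excerpt, so get_meminfo stands in for it here:

  # Accounting check sketched from the hugepages.sh@110-@130 trace lines.
  # Values are the ones from this run; nodes_test is normally filled earlier
  # by get_test_nr_hugepages and is reproduced here to keep the snippet
  # self-contained.
  shopt -s extglob
  nr_hugepages=1024
  declare -A nodes_test=([0]=1024)
  total=$(get_meminfo HugePages_Total)   # 1024
  surp=$(get_meminfo HugePages_Surp)     # 0
  resv=$(get_meminfo HugePages_Rsvd)     # 0
  (( total == nr_hugepages + surp + resv )) || exit 1
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
  done
  for node in "${!nodes_test[@]}"; do
      echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
      [[ ${nodes_sys[$node]} == "${nodes_test[$node]}" ]] || exit 1
  done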
00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.022 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.022 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.023 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.023 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.023 00:14:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.023 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.023 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.023 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.023 00:14:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.023 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.023 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.023 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.023 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.023 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.023 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.023 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.023 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.023 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.023 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.023 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.023 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.023 00:14:11 -- setup/common.sh@33 -- # echo 0 00:05:24.023 00:14:11 -- setup/common.sh@33 -- # return 0 00:05:24.023 00:14:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:24.023 00:14:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:24.023 00:14:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:24.023 00:14:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:24.023 node0=1024 expecting 1024 00:05:24.023 00:14:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:24.023 00:14:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:24.023 00:05:24.023 real 0m0.986s 00:05:24.023 user 0m0.442s 00:05:24.023 sys 0m0.485s 00:05:24.023 00:14:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.023 00:14:11 -- common/autotest_common.sh@10 -- # set +x 00:05:24.023 ************************************ 00:05:24.023 END TEST default_setup 00:05:24.023 ************************************ 00:05:24.023 00:14:11 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:24.023 00:14:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.023 00:14:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.023 00:14:11 -- common/autotest_common.sh@10 -- # set +x 00:05:24.023 ************************************ 00:05:24.023 START TEST 
per_node_1G_alloc 00:05:24.023 ************************************ 00:05:24.023 00:14:11 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:05:24.023 00:14:11 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:24.023 00:14:11 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:24.023 00:14:11 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:24.023 00:14:11 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:24.023 00:14:11 -- setup/hugepages.sh@51 -- # shift 00:05:24.023 00:14:11 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:24.023 00:14:11 -- setup/hugepages.sh@52 -- # local node_ids 00:05:24.023 00:14:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:24.023 00:14:11 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:24.023 00:14:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:24.023 00:14:11 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:24.023 00:14:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:24.023 00:14:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:24.023 00:14:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:24.023 00:14:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:24.023 00:14:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:24.023 00:14:11 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:24.023 00:14:11 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:24.023 00:14:11 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:24.023 00:14:11 -- setup/hugepages.sh@73 -- # return 0 00:05:24.023 00:14:11 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:24.023 00:14:11 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:24.023 00:14:11 -- setup/hugepages.sh@146 -- # setup output 00:05:24.023 00:14:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.023 00:14:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.644 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.644 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.644 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.644 00:14:11 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:24.644 00:14:11 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:24.644 00:14:11 -- setup/hugepages.sh@89 -- # local node 00:05:24.644 00:14:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:24.645 00:14:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:24.645 00:14:11 -- setup/hugepages.sh@92 -- # local surp 00:05:24.645 00:14:11 -- setup/hugepages.sh@93 -- # local resv 00:05:24.645 00:14:11 -- setup/hugepages.sh@94 -- # local anon 00:05:24.645 00:14:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:24.645 00:14:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:24.645 00:14:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:24.645 00:14:11 -- setup/common.sh@18 -- # local node= 00:05:24.645 00:14:11 -- setup/common.sh@19 -- # local var val 00:05:24.645 00:14:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.645 00:14:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.645 00:14:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.645 00:14:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.645 00:14:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.645 00:14:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7641280 kB' 'MemAvailable: 10542344 kB' 'Buffers: 2436 kB' 'Cached: 3102716 kB' 'SwapCached: 0 kB' 'Active: 491484 kB' 'Inactive: 2732460 kB' 'Active(anon): 129260 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120380 kB' 'Mapped: 48676 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166788 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79704 kB' 'KernelStack: 6548 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 
00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ AnonPages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 
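One detail worth calling out from the start of this test: the string "always [madvise] never" that hugepages.sh@96 pattern-matches above is the usual format of /sys/kernel/mm/transparent_hugepage/enabled (the sysfs path itself is not shown in this excerpt, so treat it as an assumption). Because the active mode is not [never], the harness reads AnonHugePages as a THP baseline; the scan in progress here returns 0, so anon ends up 0. A sketch of that branch:

  # THP-awareness check inferred from the hugepages.sh@96/@97 trace lines.
  # The sysfs path is assumed; the value format matches the string in the log.
  thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp_mode != *"[never]"* ]]; then
      # THP is not disabled, so anonymous huge pages could exist; record the
      # current AnonHugePages figure (0 kB in this run) for the accounting.
      anon=$(get_meminfo AnonHugePages)
  else
      anon=0
  fi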
00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.645 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.645 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.646 00:14:11 -- setup/common.sh@33 -- # echo 0 00:05:24.646 00:14:11 -- setup/common.sh@33 -- # return 0 00:05:24.646 00:14:11 -- setup/hugepages.sh@97 -- # anon=0 00:05:24.646 00:14:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:24.646 00:14:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.646 00:14:11 -- setup/common.sh@18 -- # local node= 00:05:24.646 00:14:11 -- setup/common.sh@19 -- # local var val 00:05:24.646 00:14:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.646 00:14:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.646 00:14:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.646 00:14:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.646 00:14:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.646 00:14:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7641280 kB' 'MemAvailable: 10542344 kB' 'Buffers: 2436 kB' 'Cached: 3102716 kB' 'SwapCached: 0 kB' 'Active: 491396 kB' 'Inactive: 2732460 kB' 'Active(anon): 129172 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120244 kB' 'Mapped: 48756 kB' 'Shmem: 
10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166788 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79704 kB' 'KernelStack: 6528 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.646 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.646 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 
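The meminfo dumps repeated through this stretch are internally consistent, which is ultimately what these scans establish: with a single hugepage size configured, Hugetlb should equal HugePages_Total times Hugepagesize, and 512 pages of 2048 kB are the 1048576 kB shown above, while HugePages_Surp and HugePages_Rsvd are both 0, so the earlier accounting reduces to 512 == 512. A quick awk spot-check of that relationship (standard /proc/meminfo fields only):

  # Cross-check Hugetlb against HugePages_Total * Hugepagesize; on this run
  # both sides come out to 1048576 kB.
  awk '/^HugePages_Total:/ {t=$2}
       /^Hugepagesize:/    {sz=$2}
       /^Hugetlb:/         {h=$2}
       END {printf "computed=%d kB, reported=%d kB\n", t*sz, h}' /proc/meminfo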
00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.647 00:14:11 -- setup/common.sh@33 -- # echo 0 00:05:24.647 00:14:11 -- setup/common.sh@33 -- # return 0 00:05:24.647 00:14:11 -- setup/hugepages.sh@99 -- # surp=0 00:05:24.647 00:14:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:24.647 00:14:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:24.647 00:14:11 -- setup/common.sh@18 -- # local node= 00:05:24.647 00:14:11 -- setup/common.sh@19 -- # local var val 00:05:24.647 00:14:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.647 00:14:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.647 00:14:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.647 00:14:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.647 00:14:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.647 00:14:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7641280 kB' 'MemAvailable: 10542344 kB' 'Buffers: 2436 kB' 'Cached: 3102716 kB' 'SwapCached: 0 kB' 'Active: 491432 kB' 'Inactive: 2732460 kB' 'Active(anon): 129208 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120316 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166788 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79704 kB' 'KernelStack: 6528 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # 
continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.647 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.647 00:14:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 
00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.648 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.648 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.648 00:14:11 -- setup/common.sh@33 -- # echo 0 00:05:24.648 00:14:11 -- setup/common.sh@33 -- # return 0 00:05:24.648 00:14:11 -- setup/hugepages.sh@100 -- # resv=0 00:05:24.648 nr_hugepages=512 
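[editor's note] The xtrace above is one pass of the setup/common.sh get_meminfo helper looking up HugePages_Rsvd: it reads /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node is given), strips the "Node N " prefix, splits each line on ': ', and echoes the value of the first matching key (0 here, so resv=0). A condensed, hedged sketch of that loop, reconstructed from the trace — the helper name, file paths, and variable names follow the trace; the assembled body is an illustration, not the exact upstream source:
  #!/usr/bin/env bash
  shopt -s extglob                                  # needed for the +([0-9]) prefix strip seen in the trace
  get_meminfo() {
      local get=$1 node=${2:-}                      # key to look up, optional NUMA node
      local var val _ line mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")              # per-node files prefix every line with "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"    # e.g. var=HugePages_Rsvd val=0
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
  }
  # usage matching the lookup traced above: get_meminfo HugePages_Rsvd   -> prints 0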
00:05:24.648 00:14:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:24.648 resv_hugepages=0 00:05:24.648 00:14:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:24.648 surplus_hugepages=0 00:05:24.648 00:14:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:24.648 anon_hugepages=0 00:05:24.648 00:14:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:24.648 00:14:11 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:24.648 00:14:11 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:24.648 00:14:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:24.648 00:14:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:24.648 00:14:11 -- setup/common.sh@18 -- # local node= 00:05:24.648 00:14:11 -- setup/common.sh@19 -- # local var val 00:05:24.648 00:14:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.648 00:14:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.648 00:14:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.648 00:14:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.648 00:14:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.648 00:14:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7641280 kB' 'MemAvailable: 10542344 kB' 'Buffers: 2436 kB' 'Cached: 3102716 kB' 'SwapCached: 0 kB' 'Active: 491108 kB' 'Inactive: 2732460 kB' 'Active(anon): 128884 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120008 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166788 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79704 kB' 'KernelStack: 6528 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- 
# continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.649 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.649 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.650 00:14:11 -- setup/common.sh@33 -- # echo 512 00:05:24.650 00:14:11 -- setup/common.sh@33 -- # return 0 00:05:24.650 00:14:11 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:24.650 00:14:11 -- setup/hugepages.sh@112 -- # get_nodes 00:05:24.650 00:14:11 -- setup/hugepages.sh@27 -- # local node 00:05:24.650 00:14:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:24.650 00:14:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:24.650 00:14:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:24.650 00:14:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:24.650 00:14:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:24.650 00:14:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:24.650 00:14:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:24.650 00:14:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.650 00:14:11 -- setup/common.sh@18 -- # local node=0 00:05:24.650 00:14:11 -- setup/common.sh@19 -- # local var val 00:05:24.650 00:14:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.650 00:14:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.650 00:14:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:24.650 00:14:11 -- 
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:24.650 00:14:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.650 00:14:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7641280 kB' 'MemUsed: 4600692 kB' 'SwapCached: 0 kB' 'Active: 491316 kB' 'Inactive: 2732460 kB' 'Active(anon): 129092 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 3105152 kB' 'Mapped: 48756 kB' 'AnonPages: 120216 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87084 kB' 'Slab: 166784 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.650 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.650 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 
00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 
00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # continue 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.651 00:14:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.651 00:14:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.651 00:14:11 -- setup/common.sh@33 -- # echo 0 00:05:24.651 00:14:11 -- setup/common.sh@33 -- # return 0 00:05:24.651 00:14:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:24.651 00:14:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:24.651 00:14:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:24.651 00:14:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:24.651 node0=512 expecting 512 00:05:24.651 00:14:11 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:24.651 00:14:11 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:24.651 00:05:24.651 real 0m0.540s 00:05:24.651 user 0m0.264s 00:05:24.651 sys 0m0.309s 00:05:24.651 00:14:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.651 00:14:11 -- common/autotest_common.sh@10 -- # set +x 00:05:24.651 ************************************ 00:05:24.651 END TEST per_node_1G_alloc 00:05:24.651 ************************************ 00:05:24.651 00:14:11 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:24.651 00:14:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.651 00:14:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.651 00:14:11 -- common/autotest_common.sh@10 -- # set +x 00:05:24.651 ************************************ 00:05:24.651 START TEST even_2G_alloc 00:05:24.651 ************************************ 00:05:24.651 00:14:11 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:05:24.651 00:14:11 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:24.651 00:14:11 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:24.651 00:14:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:24.651 00:14:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:24.651 00:14:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:24.651 00:14:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:24.651 00:14:11 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:24.651 00:14:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:24.651 00:14:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:24.651 00:14:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:24.651 00:14:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:24.651 00:14:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:24.651 00:14:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:24.651 00:14:11 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:24.651 00:14:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:24.651 00:14:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:24.651 00:14:11 -- setup/hugepages.sh@83 -- # : 0 
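[editor's note] The even_2G_alloc test starting in the trace here requests 2 GiB of 2048 kB hugepages (2097152 kB / 2048 kB = 1024 pages) and, with HUGE_EVEN_ALLOC=yes, expects setup.sh to spread them evenly across the online NUMA nodes (a single node in this run, so node0 gets all 1024). A minimal, hedged sketch of that even split using the standard sysfs interface — the function name and structure below are illustrative and not the exact hugepages.sh/setup.sh internals:
  # sketch: even per-node hugepage allocation as exercised by even_2G_alloc (assumption-labelled)
  request_even_hugepages() {
      local size_kb=$1                                           # e.g. 2097152 (2 GiB)
      local hp_kb total per_node n
      hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this system
      total=$(( size_kb / hp_kb ))                               # 1024 pages for 2 GiB
      local -a nodes=(/sys/devices/system/node/node[0-9]*)
      per_node=$(( total / ${#nodes[@]} ))                       # one node here -> 1024
      for n in "${nodes[@]}"; do
          echo "$per_node" | sudo tee "$n/hugepages/hugepages-${hp_kb}kB/nr_hugepages" >/dev/null
      done
  }
  # usage matching this run: request_even_hugepages 2097152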
00:05:24.651 00:14:11 -- setup/hugepages.sh@84 -- # : 0 00:05:24.651 00:14:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:24.651 00:14:11 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:24.651 00:14:11 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:24.651 00:14:11 -- setup/hugepages.sh@153 -- # setup output 00:05:24.651 00:14:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.651 00:14:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.910 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.174 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:25.174 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:25.174 00:14:12 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:25.174 00:14:12 -- setup/hugepages.sh@89 -- # local node 00:05:25.174 00:14:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:25.174 00:14:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:25.174 00:14:12 -- setup/hugepages.sh@92 -- # local surp 00:05:25.174 00:14:12 -- setup/hugepages.sh@93 -- # local resv 00:05:25.174 00:14:12 -- setup/hugepages.sh@94 -- # local anon 00:05:25.174 00:14:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:25.174 00:14:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:25.174 00:14:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:25.174 00:14:12 -- setup/common.sh@18 -- # local node= 00:05:25.174 00:14:12 -- setup/common.sh@19 -- # local var val 00:05:25.174 00:14:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.174 00:14:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.174 00:14:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.174 00:14:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.174 00:14:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.174 00:14:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6592760 kB' 'MemAvailable: 9493824 kB' 'Buffers: 2436 kB' 'Cached: 3102716 kB' 'SwapCached: 0 kB' 'Active: 491952 kB' 'Inactive: 2732460 kB' 'Active(anon): 129728 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120868 kB' 'Mapped: 48980 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166752 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79668 kB' 'KernelStack: 6616 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ MemTotal 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.174 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.174 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 
00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # 
continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.175 00:14:12 -- setup/common.sh@33 -- # echo 0 00:05:25.175 00:14:12 -- setup/common.sh@33 -- # return 0 00:05:25.175 00:14:12 -- setup/hugepages.sh@97 -- # anon=0 00:05:25.175 00:14:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:25.175 00:14:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.175 00:14:12 -- setup/common.sh@18 -- # local node= 00:05:25.175 00:14:12 -- setup/common.sh@19 -- # local var val 00:05:25.175 00:14:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.175 00:14:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.175 00:14:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.175 00:14:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.175 00:14:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.175 00:14:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6592508 kB' 'MemAvailable: 9493572 kB' 'Buffers: 2436 kB' 'Cached: 3102716 kB' 'SwapCached: 0 kB' 'Active: 491244 kB' 'Inactive: 2732460 kB' 'Active(anon): 129020 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120204 kB' 'Mapped: 48920 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166752 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79668 kB' 'KernelStack: 6552 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # 
continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.175 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.175 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- 
# continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.176 00:14:12 -- setup/common.sh@33 -- # echo 0 00:05:25.176 00:14:12 -- setup/common.sh@33 -- # return 0 00:05:25.176 00:14:12 -- setup/hugepages.sh@99 -- # surp=0 00:05:25.176 00:14:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:25.176 00:14:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:25.176 00:14:12 -- setup/common.sh@18 -- # local node= 00:05:25.176 00:14:12 -- setup/common.sh@19 -- # local var val 00:05:25.176 00:14:12 -- 
setup/common.sh@20 -- # local mem_f mem 00:05:25.176 00:14:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.176 00:14:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.176 00:14:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.176 00:14:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.176 00:14:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.176 00:14:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6592508 kB' 'MemAvailable: 9493572 kB' 'Buffers: 2436 kB' 'Cached: 3102716 kB' 'SwapCached: 0 kB' 'Active: 491060 kB' 'Inactive: 2732460 kB' 'Active(anon): 128836 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120224 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166752 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79668 kB' 'KernelStack: 6528 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.176 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.176 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 
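The long run of [[ ... ]] / continue entries above is setup/common.sh's get_meminfo helper walking a captured /proc/meminfo snapshot one key at a time until it reaches the requested field (here HugePages_Rsvd after the earlier HugePages_Surp pass) and echoes its value. A minimal stand-alone sketch of that parsing pattern, using a hypothetical function name of my own rather than the SPDK helper itself:

    # Hypothetical re-creation of the key/value scan traced above: split each
    # /proc/meminfo line on ': ', echo the value when the key matches, keep
    # scanning otherwise, and fall back to 0 when the key is absent.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        echo 0
    }

    get_meminfo_value HugePages_Surp    # prints 0 on the run captured here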
00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- 
setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.177 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.177 00:14:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.178 00:14:12 -- setup/common.sh@33 -- # echo 0 00:05:25.178 00:14:12 -- setup/common.sh@33 -- # return 0 00:05:25.178 00:14:12 -- setup/hugepages.sh@100 -- # resv=0 00:05:25.178 nr_hugepages=1024 00:05:25.178 00:14:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:25.178 resv_hugepages=0 00:05:25.178 00:14:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:25.178 surplus_hugepages=0 00:05:25.178 00:14:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:25.178 anon_hugepages=0 00:05:25.178 00:14:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:25.178 00:14:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:25.178 00:14:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:25.178 00:14:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:25.178 00:14:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:25.178 00:14:12 -- setup/common.sh@18 -- # local node= 00:05:25.178 00:14:12 -- setup/common.sh@19 -- # local var val 00:05:25.178 00:14:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.178 00:14:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.178 00:14:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.178 00:14:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.178 00:14:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.178 00:14:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6592760 kB' 'MemAvailable: 9493824 kB' 
'Buffers: 2436 kB' 'Cached: 3102716 kB' 'SwapCached: 0 kB' 'Active: 491232 kB' 'Inactive: 2732460 kB' 'Active(anon): 129008 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120156 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166752 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79668 kB' 'KernelStack: 6528 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- 
setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.178 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.178 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 
00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 
00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 
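By this point the script has settled anon=0, surp=0 and resv=0, and this pass re-reads HugePages_Total so the (( 1024 == nr_hugepages + surp + resv )) checks at hugepages.sh@107/@110 can confirm the kernel's view matches the requested 1024 pages. Condensed into a stand-alone check built on the hypothetical get_meminfo_value sketched earlier, not the hugepages.sh source:

    # Illustrative consistency check: the kernel's HugePages_Total should
    # equal the requested page count plus whatever surplus and reserved
    # pages are reported at the same moment.
    nr_hugepages=1024
    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)
    total=$(get_meminfo_value HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch: total=$total" >&2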
00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.179 00:14:12 -- setup/common.sh@33 -- # echo 1024 00:05:25.179 00:14:12 -- setup/common.sh@33 -- # return 0 00:05:25.179 00:14:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:25.179 00:14:12 -- setup/hugepages.sh@112 -- # get_nodes 00:05:25.179 00:14:12 -- setup/hugepages.sh@27 -- # local node 00:05:25.179 00:14:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:25.179 00:14:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:25.179 00:14:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:25.179 00:14:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:25.179 00:14:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:25.179 00:14:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:25.179 00:14:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:25.179 00:14:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.179 00:14:12 -- setup/common.sh@18 -- # local node=0 00:05:25.179 00:14:12 -- setup/common.sh@19 -- # local var val 00:05:25.179 00:14:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.179 00:14:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.179 00:14:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:25.179 00:14:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:25.179 00:14:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.179 00:14:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.179 00:14:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6594400 kB' 'MemUsed: 5647572 kB' 'SwapCached: 0 kB' 'Active: 491416 kB' 'Inactive: 2732460 kB' 'Active(anon): 129192 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 3105152 kB' 'Mapped: 48756 kB' 'AnonPages: 120320 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87084 kB' 'Slab: 166752 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.179 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.179 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 
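This pass differs from the earlier ones only in its source: with node=0 the script reads /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " from each line (the ${mem[@]#Node +([0-9]) } expansion above) before running the same key/value scan. A per-node variant of the earlier sketch, again with hypothetical naming:

    # Illustrative per-node lookup: node meminfo files prefix every line
    # with "Node <n> ", which is dropped before the usual split on ': '.
    get_node_meminfo_value() {
        local node=$1 get=$2 line var val _
        while read -r line; do
            line=${line#Node "$node" }                 # drop the "Node 0 " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        echo 0
    }

    get_node_meminfo_value 0 HugePages_Total    # 1024 on the node0 snapshot above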
00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- 
setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.180 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.180 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.180 00:14:12 -- setup/common.sh@33 -- # echo 0 00:05:25.180 00:14:12 -- setup/common.sh@33 -- # return 0 00:05:25.180 00:14:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:25.180 00:14:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:25.180 00:14:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:25.180 00:14:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:25.180 node0=1024 expecting 1024 00:05:25.180 00:14:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:25.180 00:14:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:25.180 00:05:25.180 real 0m0.527s 00:05:25.180 user 0m0.277s 00:05:25.180 sys 0m0.282s 
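The test closes by comparing what node 0 actually holds against what was requested; the 'node0=1024 expecting 1024' echo and the [[ 1024 == \1\0\2\4 ]] pattern match above are that assertion. Reduced to a stand-alone check on top of the hypothetical helpers sketched earlier:

    # Final assertion, restated: node 0 must hold exactly the expected
    # number of hugepages (1024 in this run).
    expected=1024
    actual=$(get_node_meminfo_value 0 HugePages_Total)
    echo "node0=$actual expecting $expected"
    [[ $actual == "$expected" ]] || exit 1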
00:05:25.180 00:14:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.180 00:14:12 -- common/autotest_common.sh@10 -- # set +x 00:05:25.180 ************************************ 00:05:25.180 END TEST even_2G_alloc 00:05:25.180 ************************************ 00:05:25.180 00:14:12 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:25.180 00:14:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.180 00:14:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.180 00:14:12 -- common/autotest_common.sh@10 -- # set +x 00:05:25.180 ************************************ 00:05:25.180 START TEST odd_alloc 00:05:25.180 ************************************ 00:05:25.180 00:14:12 -- common/autotest_common.sh@1104 -- # odd_alloc 00:05:25.180 00:14:12 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:25.180 00:14:12 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:25.180 00:14:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:25.180 00:14:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:25.180 00:14:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:25.180 00:14:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:25.180 00:14:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:25.180 00:14:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:25.180 00:14:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:25.180 00:14:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:25.180 00:14:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:25.180 00:14:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:25.180 00:14:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:25.180 00:14:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:25.180 00:14:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:25.180 00:14:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:25.180 00:14:12 -- setup/hugepages.sh@83 -- # : 0 00:05:25.180 00:14:12 -- setup/hugepages.sh@84 -- # : 0 00:05:25.180 00:14:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:25.180 00:14:12 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:25.180 00:14:12 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:25.180 00:14:12 -- setup/hugepages.sh@160 -- # setup output 00:05:25.180 00:14:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.180 00:14:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:25.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.765 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:25.765 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:25.765 00:14:12 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:25.765 00:14:12 -- setup/hugepages.sh@89 -- # local node 00:05:25.765 00:14:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:25.765 00:14:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:25.765 00:14:12 -- setup/hugepages.sh@92 -- # local surp 00:05:25.765 00:14:12 -- setup/hugepages.sh@93 -- # local resv 00:05:25.765 00:14:12 -- setup/hugepages.sh@94 -- # local anon 00:05:25.765 00:14:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:25.765 00:14:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:25.765 00:14:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:25.765 00:14:12 -- setup/common.sh@18 -- # local node= 
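The odd_alloc setup above asks get_test_nr_hugepages for 2098176 kB with HUGEMEM=2049 and ends up with nr_hugepages=1025. A rough sketch of that arithmetic, assuming the 2048 kB Hugepagesize reported in the meminfo dumps below and ceiling-style rounding (the script's exact rounding is not visible in this trace):

    size_kb=2098176      # HUGEMEM=2049 MiB expressed in kB
    hugepage_kb=2048     # Hugepagesize from the meminfo dumps in this pass
    pages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))  # ceiling division
    echo "$pages"        # prints 1025, matching nr_hugepages=1025 above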
00:05:25.765 00:14:12 -- setup/common.sh@19 -- # local var val 00:05:25.765 00:14:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.765 00:14:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.765 00:14:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.765 00:14:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.765 00:14:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.765 00:14:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.765 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.765 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.766 00:14:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6591152 kB' 'MemAvailable: 9492216 kB' 'Buffers: 2436 kB' 'Cached: 3102716 kB' 'SwapCached: 0 kB' 'Active: 491820 kB' 'Inactive: 2732460 kB' 'Active(anon): 129596 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120768 kB' 'Mapped: 49004 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166712 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79628 kB' 'KernelStack: 6568 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:25.766 00:14:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.766 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.766 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.766 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.766 00:14:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.766 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.766 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.766 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.766 00:14:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.766 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.766 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.766 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.767 00:14:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.767 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.767 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.767 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.767 00:14:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.767 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.767 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.767 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.767 00:14:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.767 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.767 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.767 
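The mem=("${mem[@]#Node +([0-9]) }") expansion recorded just above strips the "Node <N> " prefix that per-node meminfo files carry, so the same field parser can handle both /proc/meminfo and a node's meminfo file. A minimal stand-alone illustration, assuming extglob is enabled as this expansion requires; the sample values are taken from the MemTotal/MemFree figures in this trace:

    shopt -s extglob
    mem=('Node 0 MemTotal: 12241972 kB' 'Node 0 MemFree: 6591152 kB')
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"   # -> 'MemTotal: 12241972 kB' and 'MemFree: 6591152 kB'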
00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.767 00:14:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.767 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.767 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.767 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.767 00:14:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.767 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.767 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.767 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.767 00:14:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.767 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.767 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.767 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.767 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.767 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.767 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.767 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.767 00:14:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.767 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.767 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.767 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.767 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.768 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.768 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.768 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.768 00:14:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.768 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.768 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.768 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.768 00:14:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.768 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.768 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.768 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.768 00:14:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.768 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.768 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.768 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.768 00:14:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.768 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.768 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.768 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.768 00:14:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.768 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.768 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.768 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.768 00:14:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.768 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.768 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.768 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.768 00:14:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.768 00:14:12 -- setup/common.sh@32 -- # continue 
00:05:25.768 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.768 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.769 00:14:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.769 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.769 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.769 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.769 00:14:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.769 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.769 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.769 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.769 00:14:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.769 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.769 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.769 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.769 00:14:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.769 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.769 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.769 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.769 00:14:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.769 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.769 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.769 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.769 00:14:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.769 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.769 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.769 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.769 00:14:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.769 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.769 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.769 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.769 00:14:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.769 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.769 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.769 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.770 00:14:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.770 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.770 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.770 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.770 00:14:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.770 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.770 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.770 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.770 00:14:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.770 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.770 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.770 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.770 00:14:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.770 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.770 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.770 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.770 00:14:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:05:25.770 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.770 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.770 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.770 00:14:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.770 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.770 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.770 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.771 00:14:12 -- setup/common.sh@33 -- # echo 0 00:05:25.771 00:14:12 -- setup/common.sh@33 -- # return 0 00:05:25.771 00:14:12 -- setup/hugepages.sh@97 -- # anon=0 00:05:25.771 00:14:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:25.771 00:14:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.771 00:14:12 -- setup/common.sh@18 -- # local node= 00:05:25.771 00:14:12 -- setup/common.sh@19 -- # local var val 00:05:25.771 00:14:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.771 00:14:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.771 00:14:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.771 00:14:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.771 00:14:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.771 00:14:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 
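The scan that just resolved anon=0, and the HugePages_Surp pass starting here, repeat the same loop each time: read the meminfo file with IFS=': ', compare each field name against the requested key, and echo that field's value. A condensed stand-alone sketch of that lookup; the real get_meminfo in setup/common.sh also takes a node argument and a per-node meminfo path, which this omits:

    get_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        echo 0
    }
    get_field AnonHugePages   # -> 0 on this host, matching anon=0 in the trace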
00:14:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6591152 kB' 'MemAvailable: 9492216 kB' 'Buffers: 2436 kB' 'Cached: 3102716 kB' 'SwapCached: 0 kB' 'Active: 491436 kB' 'Inactive: 2732460 kB' 'Active(anon): 129212 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120392 kB' 'Mapped: 48884 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166704 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79620 kB' 'KernelStack: 6504 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 
00:14:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.771 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.771 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 
00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.772 00:14:12 -- setup/common.sh@33 -- # echo 0 00:05:25.772 00:14:12 -- setup/common.sh@33 -- # return 0 00:05:25.772 00:14:12 -- setup/hugepages.sh@99 -- # surp=0 00:05:25.772 00:14:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:25.772 00:14:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:25.772 00:14:12 -- setup/common.sh@18 -- # local node= 00:05:25.772 00:14:12 -- setup/common.sh@19 -- # local var val 00:05:25.772 00:14:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.772 00:14:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.772 00:14:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.772 00:14:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.772 00:14:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.772 00:14:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6591560 kB' 'MemAvailable: 9492624 kB' 'Buffers: 2436 kB' 'Cached: 3102716 kB' 'SwapCached: 0 kB' 'Active: 491332 kB' 'Inactive: 2732460 kB' 'Active(anon): 129108 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120236 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166700 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79616 kB' 'KernelStack: 6528 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 
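The meminfo snapshots in this pass all report HugePages_Total: 1025, Hugepagesize: 2048 kB and Hugetlb: 2099200 kB, which is self-consistent: the Hugetlb figure is exactly the page count times the page size.

    echo $(( 1025 * 2048 ))   # -> 2099200, matching the 'Hugetlb: 2099200 kB' entries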
00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.772 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.772 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 
-- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 
00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.773 00:14:12 -- setup/common.sh@33 -- # echo 0 00:05:25.773 00:14:12 -- setup/common.sh@33 -- # return 0 00:05:25.773 00:14:12 -- setup/hugepages.sh@100 -- # resv=0 00:05:25.773 nr_hugepages=1025 00:05:25.773 00:14:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:25.773 resv_hugepages=0 00:05:25.773 00:14:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:25.773 surplus_hugepages=0 00:05:25.773 00:14:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:25.773 anon_hugepages=0 00:05:25.773 00:14:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:25.773 00:14:12 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:25.773 00:14:12 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:25.773 00:14:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:25.773 00:14:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:25.773 00:14:12 -- setup/common.sh@18 -- # local node= 00:05:25.773 00:14:12 -- setup/common.sh@19 -- # local var val 00:05:25.773 00:14:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.773 00:14:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.773 00:14:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.773 00:14:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.773 00:14:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.773 00:14:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6591560 kB' 'MemAvailable: 9492624 kB' 'Buffers: 2436 kB' 'Cached: 3102716 kB' 'SwapCached: 0 kB' 'Active: 491568 kB' 'Inactive: 2732460 kB' 'Active(anon): 129344 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120472 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166700 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79616 kB' 'KernelStack: 6528 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.773 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.773 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 
00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 
00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.774 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.774 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.775 00:14:12 -- setup/common.sh@33 -- # echo 1025 00:05:25.775 00:14:12 -- setup/common.sh@33 -- # return 0 00:05:25.775 00:14:12 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:25.775 00:14:12 -- setup/hugepages.sh@112 -- # get_nodes 00:05:25.775 00:14:12 -- setup/hugepages.sh@27 -- # local node 00:05:25.775 00:14:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:25.775 00:14:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:25.775 00:14:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:25.775 00:14:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:25.775 00:14:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:25.775 00:14:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:25.775 00:14:12 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:25.775 00:14:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.775 00:14:12 -- setup/common.sh@18 -- # local node=0 00:05:25.775 00:14:12 -- setup/common.sh@19 -- # local var val 00:05:25.775 00:14:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.775 00:14:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.775 00:14:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:25.775 00:14:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:25.775 00:14:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.775 00:14:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6591560 kB' 'MemUsed: 5650412 kB' 'SwapCached: 0 kB' 'Active: 491152 kB' 'Inactive: 2732460 kB' 'Active(anon): 128928 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732460 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 3105152 kB' 'Mapped: 48756 kB' 'AnonPages: 120056 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87084 kB' 'Slab: 166700 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 
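
The lookup traced here resolves a per-node meminfo file (/sys/devices/system/node/node0/meminfo), strips the "Node 0 " prefix from every field, and scans key/value pairs with IFS=': ' until the requested key is found. A minimal standalone sketch of that technique follows; get_meminfo_sketch is a hypothetical name and the body is a simplified reconstruction, not the actual setup/common.sh get_meminfo:

# Simplified sketch of the per-node meminfo lookup traced above (illustrative only).
get_meminfo_sketch() {
    local key=$1 node=$2
    local mem_f=/proc/meminfo
    # Use the per-node file when a node index is given and the file exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}              # per-node lines are prefixed with "Node <N> "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$key" ]]; then
            echo "$val"                         # kB for sizes, a bare count for HugePages_*
            return 0
        fi
    done <"$mem_f"
    return 1
}
# e.g. get_meminfo_sketch HugePages_Surp 0   # prints 0 on this runner, per the trace
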
00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.775 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.775 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.776 
00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # continue 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.776 00:14:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.776 00:14:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.776 00:14:12 -- setup/common.sh@33 -- # echo 0 00:05:25.776 00:14:12 -- setup/common.sh@33 -- # return 0 00:05:25.776 00:14:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:25.776 00:14:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:25.776 00:14:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:25.776 00:14:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:25.776 node0=1025 expecting 1025 00:05:25.776 00:14:12 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:25.776 00:14:12 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:25.776 00:05:25.776 real 0m0.541s 00:05:25.776 user 0m0.275s 00:05:25.776 sys 0m0.295s 00:05:25.776 00:14:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.776 00:14:12 -- common/autotest_common.sh@10 -- # set +x 00:05:25.776 ************************************ 00:05:25.776 END TEST odd_alloc 00:05:25.776 ************************************ 00:05:25.776 00:14:12 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:25.776 00:14:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.776 00:14:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.776 00:14:12 -- common/autotest_common.sh@10 -- # set +x 00:05:25.776 ************************************ 00:05:25.776 START TEST custom_alloc 00:05:25.776 ************************************ 00:05:25.776 00:14:12 -- common/autotest_common.sh@1104 -- # custom_alloc 00:05:25.776 00:14:12 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:25.776 00:14:12 -- setup/hugepages.sh@169 -- # local node 00:05:25.776 00:14:12 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:25.776 00:14:12 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:25.776 00:14:12 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:25.776 00:14:12 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:25.776 00:14:12 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:25.776 00:14:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:25.776 00:14:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:25.776 00:14:12 -- setup/hugepages.sh@57 -- # nr_hugepages=512 
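
The custom_alloc setup that starts here converts a requested total of 1048576 kB into a hugepage count: with the runner's 2048 kB default hugepage size that is 512 pages, which end up pinned to node 0 as HUGENODE='nodes_hp[0]=512' later in the trace. A rough sketch of that arithmetic, with variable names that are only illustrative of what setup/hugepages.sh computes:

# Sketch of the size-to-page-count conversion seen in this custom_alloc setup.
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this runner
size_kb=1048576                                  # requested total (1 GiB)
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 1048576 / 2048 = 512
nodes_hp=([0]=$nr_hugepages)                     # single NUMA node here, so all pages on node 0
HUGENODE="nodes_hp[0]=${nodes_hp[0]}"            # -> nodes_hp[0]=512, as the trace sets at @187
echo "$HUGENODE"
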
00:05:25.776 00:14:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:25.776 00:14:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:25.776 00:14:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:25.776 00:14:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:25.776 00:14:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:25.776 00:14:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:25.776 00:14:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:25.776 00:14:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:25.776 00:14:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:25.776 00:14:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:25.776 00:14:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:25.776 00:14:12 -- setup/hugepages.sh@83 -- # : 0 00:05:25.776 00:14:12 -- setup/hugepages.sh@84 -- # : 0 00:05:25.776 00:14:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:25.776 00:14:12 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:25.776 00:14:12 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:25.776 00:14:12 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:25.776 00:14:12 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:25.776 00:14:12 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:25.776 00:14:12 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:25.776 00:14:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:25.776 00:14:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:25.776 00:14:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:25.776 00:14:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:25.776 00:14:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:25.776 00:14:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:25.776 00:14:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:25.776 00:14:12 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:25.776 00:14:12 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:25.776 00:14:12 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:25.776 00:14:12 -- setup/hugepages.sh@78 -- # return 0 00:05:25.776 00:14:12 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:25.776 00:14:12 -- setup/hugepages.sh@187 -- # setup output 00:05:25.776 00:14:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.776 00:14:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:26.346 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.346 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:26.346 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:26.346 00:14:13 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:26.346 00:14:13 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:26.346 00:14:13 -- setup/hugepages.sh@89 -- # local node 00:05:26.346 00:14:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:26.346 00:14:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:26.346 00:14:13 -- setup/hugepages.sh@92 -- # local surp 00:05:26.346 00:14:13 -- setup/hugepages.sh@93 -- # local resv 00:05:26.346 00:14:13 -- setup/hugepages.sh@94 -- # local anon 00:05:26.346 00:14:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:26.346 00:14:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:26.346 
00:14:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:26.346 00:14:13 -- setup/common.sh@18 -- # local node= 00:05:26.346 00:14:13 -- setup/common.sh@19 -- # local var val 00:05:26.346 00:14:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.346 00:14:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.346 00:14:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.346 00:14:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.346 00:14:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.346 00:14:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.346 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.346 00:14:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7644264 kB' 'MemAvailable: 10545332 kB' 'Buffers: 2436 kB' 'Cached: 3102720 kB' 'SwapCached: 0 kB' 'Active: 491628 kB' 'Inactive: 2732464 kB' 'Active(anon): 129404 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120600 kB' 'Mapped: 48880 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166596 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79512 kB' 'KernelStack: 6536 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:26.346 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.346 00:14:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.346 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.346 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.346 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.346 00:14:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.346 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.346 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.346 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.346 00:14:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.346 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.346 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.346 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.346 00:14:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.346 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.346 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.346 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 
00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.347 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.347 00:14:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.348 00:14:13 -- setup/common.sh@33 -- # echo 0 00:05:26.348 00:14:13 -- setup/common.sh@33 -- # return 0 00:05:26.348 00:14:13 -- setup/hugepages.sh@97 -- # anon=0 00:05:26.348 00:14:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:26.348 00:14:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.348 00:14:13 -- setup/common.sh@18 -- # local node= 00:05:26.348 00:14:13 -- setup/common.sh@19 -- # local var val 00:05:26.348 00:14:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.348 00:14:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.348 00:14:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.348 00:14:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.348 00:14:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.348 00:14:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
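
Before AnonHugePages is counted, the trace gates on the transparent-hugepage mode ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]] at setup/hugepages.sh@96): anonymous huge pages only matter when THP is not pinned to "never". A simplified sketch of that check, using awk here in place of the script's get_meminfo helper:

# Sketch of the THP gate before the anon accounting (simplified, not the script's exact code).
thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" on this box
anon=0
if [[ $thp_mode != *"[never]"* ]]; then
    # THP is not forced off, so read the anonymous huge page total (0 kB on this runner).
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon=$anon"
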
00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7644276 kB' 'MemAvailable: 10545344 kB' 'Buffers: 2436 kB' 'Cached: 3102720 kB' 'SwapCached: 0 kB' 'Active: 491520 kB' 'Inactive: 2732464 kB' 'Active(anon): 129296 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120448 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166596 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79512 kB' 'KernelStack: 6512 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 
00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ AnonPages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.348 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.348 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 
00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.349 00:14:13 -- setup/common.sh@33 -- # echo 0 00:05:26.349 00:14:13 -- setup/common.sh@33 -- # return 0 00:05:26.349 00:14:13 -- setup/hugepages.sh@99 -- # surp=0 00:05:26.349 00:14:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:26.349 00:14:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:26.349 00:14:13 -- setup/common.sh@18 -- # local node= 00:05:26.349 00:14:13 -- setup/common.sh@19 -- # local var val 00:05:26.349 00:14:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.349 00:14:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.349 00:14:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.349 00:14:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.349 00:14:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.349 00:14:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7644276 kB' 'MemAvailable: 10545344 kB' 'Buffers: 2436 kB' 'Cached: 3102720 kB' 'SwapCached: 0 kB' 'Active: 491480 kB' 'Inactive: 2732464 kB' 'Active(anon): 129256 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120364 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166592 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79508 kB' 'KernelStack: 6544 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.349 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.349 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 
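
With anon and HugePages_Surp both 0, the verification gathers HugePages_Rsvd next and then applies the same balance the odd_alloc run asserted at setup/hugepages.sh@110: HugePages_Total must equal the configured nr_hugepages plus surplus plus reserved pages. A hedged sketch of that final check, with illustrative variable names:

# Sketch of the hugepage accounting check (cf. setup/hugepages.sh@110 in the odd_alloc trace).
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
nr_hugepages=512                     # what this custom_alloc run configured
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting is consistent: $total pages"
else
    echo "unexpected HugePages_Total=$total (surp=$surp resv=$resv)" >&2
fi
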
00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.350 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.350 00:14:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.351 00:14:13 -- setup/common.sh@31 
-- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.351 00:14:13 -- setup/common.sh@33 -- # echo 0 00:05:26.351 00:14:13 -- setup/common.sh@33 -- # return 0 00:05:26.351 00:14:13 -- setup/hugepages.sh@100 -- # resv=0 00:05:26.351 nr_hugepages=512 00:05:26.351 00:14:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:26.351 resv_hugepages=0 00:05:26.351 00:14:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:26.351 surplus_hugepages=0 00:05:26.351 00:14:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:26.351 anon_hugepages=0 00:05:26.351 00:14:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:26.351 00:14:13 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:26.351 00:14:13 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:26.351 00:14:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:26.351 00:14:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:26.351 00:14:13 -- setup/common.sh@18 -- # local node= 00:05:26.351 00:14:13 -- setup/common.sh@19 -- # local var val 00:05:26.351 00:14:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.351 00:14:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.351 00:14:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.351 00:14:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.351 00:14:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.351 00:14:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7644276 kB' 'MemAvailable: 10545344 kB' 'Buffers: 2436 kB' 'Cached: 3102720 kB' 'SwapCached: 0 kB' 'Active: 491444 kB' 'Inactive: 2732464 kB' 'Active(anon): 129220 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120332 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166592 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79508 kB' 'KernelStack: 6528 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 
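
The xtrace above replays setup/common.sh's get_meminfo helper one meminfo field at a time: it picks a source file (the global /proc/meminfo, or a node's own meminfo when a node argument is given), strips any leading "Node N " prefix, splits each line on ': ', and echoes the value once the requested key matches. A condensed bash sketch of that flow, reconstructed from the trace rather than copied from the script, so details of the real helper may differ:

shopt -s extglob   # the trace strips "Node N " prefixes with an extglob pattern

# get_meminfo KEY [NODE] -> prints the numeric value for KEY (unit dropped)
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local var val _ line
    # Per-node lookups read the node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node N " on per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # e.g. HugePages_Rsvd in the trace above
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Rsvd   # prints 0 on the runner traced above
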
00:05:26.351 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.351 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.351 00:14:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
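
Once those lookups return, setup/hugepages.sh (the @99-@110 lines in this trace) folds the surplus and reserved counts into a consistency check against the requested pool: HugePages_Total must equal nr_hugepages plus surplus plus reserved pages. The arithmetic below is an illustrative sketch of that check; meminfo() is a hypothetical stand-in for the get_meminfo calls traced above, not the script's own helper:

# meminfo KEY -> value column from /proc/meminfo (stand-in for get_meminfo)
meminfo() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

nr_hugepages=512                      # pool size the custom_alloc test asked for
surp=$(meminfo HugePages_Surp)        # 0 in the trace above
resv=$(meminfo HugePages_Rsvd)        # 0 in the trace above
total=$(meminfo HugePages_Total)      # 512 in the trace above
echo "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" "surplus_hugepages=$surp"

# The pool is consistent when the kernel-reported total covers the requested
# pages plus anything reserved or allocated as surplus.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool accounted for"
else
    echo "hugepage accounting mismatch: total=$total" >&2
fi
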
00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.352 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.352 00:14:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.353 00:14:13 -- setup/common.sh@33 -- # echo 512 00:05:26.353 00:14:13 -- setup/common.sh@33 -- # return 0 00:05:26.353 00:14:13 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:26.353 00:14:13 -- setup/hugepages.sh@112 -- # get_nodes 00:05:26.353 00:14:13 -- setup/hugepages.sh@27 -- # local node 00:05:26.353 00:14:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:26.353 00:14:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:26.353 00:14:13 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:26.353 00:14:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:26.353 00:14:13 -- setup/hugepages.sh@115 
-- # for node in "${!nodes_test[@]}" 00:05:26.353 00:14:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:26.353 00:14:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:26.353 00:14:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.353 00:14:13 -- setup/common.sh@18 -- # local node=0 00:05:26.353 00:14:13 -- setup/common.sh@19 -- # local var val 00:05:26.353 00:14:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.353 00:14:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.353 00:14:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:26.353 00:14:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:26.353 00:14:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.353 00:14:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7644276 kB' 'MemUsed: 4597696 kB' 'SwapCached: 0 kB' 'Active: 491432 kB' 'Inactive: 2732464 kB' 'Active(anon): 129208 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3105156 kB' 'Mapped: 48756 kB' 'AnonPages: 120368 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87084 kB' 'Slab: 166592 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 
00:14:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 
-- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.353 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.353 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.354 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.354 00:14:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.354 00:14:13 -- setup/common.sh@33 -- # echo 0 00:05:26.354 00:14:13 -- setup/common.sh@33 -- # return 0 00:05:26.354 00:14:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:26.354 00:14:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:26.354 00:14:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:26.354 00:14:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:26.354 node0=512 expecting 512 00:05:26.354 00:14:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:26.354 00:14:13 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:26.354 00:05:26.354 real 0m0.552s 00:05:26.354 user 0m0.274s 00:05:26.354 sys 0m0.297s 00:05:26.354 00:14:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.354 00:14:13 -- common/autotest_common.sh@10 -- # set +x 00:05:26.354 ************************************ 00:05:26.354 END TEST custom_alloc 00:05:26.354 ************************************ 00:05:26.354 00:14:13 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:26.354 00:14:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.354 00:14:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.354 00:14:13 -- common/autotest_common.sh@10 -- # set +x 00:05:26.354 ************************************ 00:05:26.354 START TEST no_shrink_alloc 00:05:26.354 ************************************ 00:05:26.354 00:14:13 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:05:26.354 00:14:13 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:26.354 00:14:13 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:26.354 00:14:13 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:26.354 00:14:13 -- setup/hugepages.sh@51 -- # shift 00:05:26.354 00:14:13 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:26.354 00:14:13 -- setup/hugepages.sh@52 -- # local node_ids 00:05:26.354 00:14:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:26.354 00:14:13 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:05:26.354 00:14:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:26.354 00:14:13 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:26.354 00:14:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:26.354 00:14:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:26.354 00:14:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:26.354 00:14:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:26.354 00:14:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:26.354 00:14:13 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:26.354 00:14:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:26.354 00:14:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:26.354 00:14:13 -- setup/hugepages.sh@73 -- # return 0 00:05:26.354 00:14:13 -- setup/hugepages.sh@198 -- # setup output 00:05:26.354 00:14:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.354 00:14:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:26.925 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.925 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:26.925 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:26.925 00:14:13 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:26.925 00:14:13 -- setup/hugepages.sh@89 -- # local node 00:05:26.925 00:14:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:26.925 00:14:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:26.925 00:14:13 -- setup/hugepages.sh@92 -- # local surp 00:05:26.925 00:14:13 -- setup/hugepages.sh@93 -- # local resv 00:05:26.925 00:14:13 -- setup/hugepages.sh@94 -- # local anon 00:05:26.925 00:14:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:26.925 00:14:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:26.925 00:14:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:26.925 00:14:13 -- setup/common.sh@18 -- # local node= 00:05:26.925 00:14:13 -- setup/common.sh@19 -- # local var val 00:05:26.925 00:14:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.925 00:14:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.925 00:14:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.925 00:14:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.925 00:14:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.925 00:14:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.925 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.925 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.925 00:14:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6599832 kB' 'MemAvailable: 9500900 kB' 'Buffers: 2436 kB' 'Cached: 3102720 kB' 'SwapCached: 0 kB' 'Active: 491820 kB' 'Inactive: 2732464 kB' 'Active(anon): 129596 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120700 kB' 'Mapped: 48884 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166580 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79496 kB' 'KernelStack: 6488 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350476 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:26.925 00:14:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.925 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.925 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.925 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.925 00:14:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.925 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.925 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.925 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.925 00:14:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.925 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.925 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.925 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.925 00:14:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.925 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.925 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.925 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.925 00:14:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.925 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.925 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 
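
For the no_shrink_alloc run that starts here, verify_nr_hugepages first decides whether anonymous huge pages need to be counted at all: the trace tests the literal transparent_hugepage setting "always [madvise] never" against *\[\n\e\v\e\r\]*, so AnonHugePages is only consulted when THP is not pinned to "never". A sketch of that gate; the sysfs path is an assumption here, since the trace only shows the setting's value, not where it was read from:

# Read the current THP mode, e.g. "always [madvise] never" (path assumed).
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP may hand out anonymous huge pages, so count them toward the pool check.
    anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
fi
echo "anon_hugepages=$anon"   # 0 in the trace above
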
00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 
00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # 
continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.926 00:14:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.926 00:14:13 -- setup/common.sh@33 -- # echo 0 00:05:26.926 00:14:13 -- setup/common.sh@33 -- # return 0 00:05:26.926 00:14:13 -- setup/hugepages.sh@97 -- # anon=0 00:05:26.926 00:14:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:26.926 00:14:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.926 00:14:13 -- setup/common.sh@18 -- # local node= 00:05:26.926 00:14:13 -- setup/common.sh@19 -- # local var val 00:05:26.926 00:14:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.926 00:14:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.926 00:14:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.926 00:14:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.926 00:14:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.926 00:14:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.926 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6600504 kB' 'MemAvailable: 9501572 kB' 'Buffers: 2436 kB' 'Cached: 3102720 kB' 'SwapCached: 0 kB' 'Active: 491644 kB' 'Inactive: 2732464 kB' 'Active(anon): 129420 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120496 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166580 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79496 kB' 'KernelStack: 6496 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # 
continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # continue 
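
The trace around this point is setup/common.sh's get_meminfo walking /proc/meminfo one line at a time: it splits each line on ': ' with read -r var val _, logs "continue" for every key that is not the one requested, and echoes the value once the key matches. A minimal standalone sketch of that matching loop follows; the function name and the fallback of printing 0 when the field is missing are illustrative assumptions, not the exact SPDK helper.

    #!/usr/bin/env bash
    # Minimal sketch of the loop being traced: split each /proc/meminfo line on
    # ': ', skip (continue) every key that is not the requested one, and print
    # the matching value.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the non-matching keys are the "continue" entries above
            echo "$val"
            return 0
        done < /proc/meminfo
        echo 0                                 # assumed fallback when the field is absent
    }

    get_meminfo_sketch AnonHugePages           # prints 0 on the host captured in this log
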
00:05:26.927 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.927 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.927 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.928 00:14:14 -- setup/common.sh@33 -- # echo 0 00:05:26.928 00:14:14 -- setup/common.sh@33 -- # return 0 00:05:26.928 00:14:14 -- setup/hugepages.sh@99 -- # surp=0 00:05:26.928 00:14:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:26.928 00:14:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:26.928 00:14:14 -- setup/common.sh@18 -- # local node= 00:05:26.928 00:14:14 -- setup/common.sh@19 -- # local var val 00:05:26.928 00:14:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.928 00:14:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.928 00:14:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.928 00:14:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.928 00:14:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.928 00:14:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6600504 kB' 'MemAvailable: 9501572 kB' 'Buffers: 2436 kB' 'Cached: 3102720 kB' 'SwapCached: 0 kB' 'Active: 491384 kB' 'Inactive: 2732464 kB' 'Active(anon): 129160 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120260 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166580 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79496 kB' 'KernelStack: 6528 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 
00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.928 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.928 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- 
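
The escaped patterns such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are not corruption in the log: when bash xtrace prints a [[ $var == "$get" ]] comparison whose right-hand side is quoted, it renders the expansion with each character backslash-escaped so the operand is unambiguously a literal string rather than a glob. A tiny reproduction, assuming a reasonably recent bash:

    set -x
    get=HugePages_Rsvd
    if [[ MemTotal == "$get" ]]; then echo match; else echo no match; fi
    # the [[ ]] trace line is printed as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
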
setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 
00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.929 00:14:14 -- setup/common.sh@33 -- # echo 0 00:05:26.929 00:14:14 -- setup/common.sh@33 -- # return 0 00:05:26.929 00:14:14 -- setup/hugepages.sh@100 -- # resv=0 00:05:26.929 nr_hugepages=1024 00:05:26.929 00:14:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:26.929 resv_hugepages=0 00:05:26.929 00:14:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:26.929 surplus_hugepages=0 00:05:26.929 00:14:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:26.929 00:14:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:26.929 anon_hugepages=0 00:05:26.929 00:14:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:26.929 00:14:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:26.929 00:14:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:26.929 00:14:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:26.929 00:14:14 -- setup/common.sh@18 -- # local node= 00:05:26.929 00:14:14 -- setup/common.sh@19 -- # local var val 00:05:26.929 00:14:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.929 00:14:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
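
By this point the run has established anon=0, surp=0, resv=0 and nr_hugepages=1024, and setup/hugepages.sh asserts that the kernel's HugePages_Total equals nr_hugepages + surp + resv. A self-contained sketch of that consistency check is below; reading /proc/sys/vm/nr_hugepages is a stand-in for however the script sets its nr_hugepages variable, and the helper name is illustrative.

    #!/usr/bin/env bash
    # Sketch of the consistency check the trace performs: with no surplus or
    # reserved huge pages, HugePages_Total must equal the configured pool size.
    meminfo() { awk -v k="$1" -F'[: ]+' '$1 == k { print $2 }' /proc/meminfo; }

    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
    surp=$(meminfo HugePages_Surp)
    resv=$(meminfo HugePages_Rsvd)
    total=$(meminfo HugePages_Total)

    if (( total == nr_hugepages + surp + resv )); then
        echo "huge page accounting is consistent: $total pages"
    else
        echo "mismatch: total=$total nr_hugepages=$nr_hugepages surp=$surp resv=$resv" >&2
        exit 1
    fi
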
00:05:26.929 00:14:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.929 00:14:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.929 00:14:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.929 00:14:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6601092 kB' 'MemAvailable: 9502160 kB' 'Buffers: 2436 kB' 'Cached: 3102720 kB' 'SwapCached: 0 kB' 'Active: 491612 kB' 'Inactive: 2732464 kB' 'Active(anon): 129388 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120492 kB' 'Mapped: 48756 kB' 'Shmem: 10468 kB' 'KReclaimable: 87084 kB' 'Slab: 166580 kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79496 kB' 'KernelStack: 6528 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.929 00:14:14 -- 
setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.929 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.929 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 
00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 
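
The snapshots above report Hugepagesize: 2048 kB, HugePages_Total: 1024 and Hugetlb: 2097152 kB, which are mutually consistent: Hugetlb counts the memory held in huge pages of every size, so with only 2 MiB pages populated it is HugePages_Total × Hugepagesize = 1024 × 2048 kB = 2097152 kB, i.e. 2 GiB. A quick recomputation, valid only under that single-page-size assumption:

    # Recompute the Hugetlb figure from the 2 MiB pool on the current host;
    # this shortcut only holds when no other huge page size is populated.
    pages=$(awk '/^HugePages_Total:/ { print $2 }' /proc/meminfo)
    size_kb=$(awk '/^Hugepagesize:/  { print $2 }' /proc/meminfo)
    echo "Hugetlb should be $(( pages * size_kb )) kB"   # 1024 * 2048 = 2097152 on this host
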
00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.930 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.930 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 
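
The next stretch of the trace (get_nodes) globs the /sys/devices/system/node/node<N> directories and records how many huge pages each node is expected to hold; on this single-node guest that is one entry of 1024 pages. A hedged sketch of such an enumeration is below; it reads the per-node 2 MiB pool from sysfs, which is an assumption about the data source rather than a copy of the script, and the array name is illustrative.

    #!/usr/bin/env bash
    shopt -s nullglob
    nodes_sys=()

    # Enumerate NUMA node directories the way the traced loop does and read each
    # node's current 2 MiB huge page count from sysfs (path is an assumption).
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        nodes_sys[node]=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    done

    echo "found ${#nodes_sys[@]} node(s)"
    for node in "${!nodes_sys[@]}"; do
        echo "node$node holds ${nodes_sys[node]} x 2048 kB huge pages"
    done
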
00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.931 00:14:14 -- setup/common.sh@33 -- # echo 1024 00:05:26.931 00:14:14 -- setup/common.sh@33 -- # return 0 00:05:26.931 00:14:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:26.931 00:14:14 -- setup/hugepages.sh@112 -- # get_nodes 00:05:26.931 00:14:14 -- setup/hugepages.sh@27 -- # local node 00:05:26.931 00:14:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:26.931 00:14:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:26.931 00:14:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:26.931 00:14:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:26.931 00:14:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:26.931 00:14:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:26.931 00:14:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:26.931 00:14:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.931 00:14:14 -- setup/common.sh@18 -- # local node=0 00:05:26.931 00:14:14 -- setup/common.sh@19 -- # local var val 00:05:26.931 00:14:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.931 00:14:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.931 00:14:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:26.931 00:14:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:26.931 00:14:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.931 00:14:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6601760 kB' 'MemUsed: 5640212 kB' 'SwapCached: 0 kB' 'Active: 491500 kB' 'Inactive: 2732464 kB' 'Active(anon): 129276 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3105156 kB' 'Mapped: 48756 kB' 'AnonPages: 120376 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87084 kB' 'Slab: 166580 
kB' 'SReclaimable: 87084 kB' 'SUnreclaim: 79496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 
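
For the per-node pass the trace switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix before running the same key scan, which is why this snapshot carries MemUsed and FilePages rather than MemAvailable. A minimal sketch of that source selection and prefix handling, with illustrative names and a simpler prefix strip than the extglob pattern used in the traced script:

    #!/usr/bin/env bash
    # Sketch: fetch one meminfo field globally or for a single NUMA node.
    # Per-node files prefix every line with "Node <N> ", which must be removed
    # before the key can be compared.
    get_meminfo_node() {
        local get=$1 node=${2-} mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#"Node $node "}                 # no-op for /proc/meminfo lines
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        echo 0                                         # assumed fallback when the field is absent
    }

    get_meminfo_node HugePages_Surp 0                  # prints 0 on the traced host
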
00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- 
setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.931 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.931 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.932 00:14:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.932 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.932 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.932 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.932 00:14:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.932 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.932 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.932 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.932 00:14:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.932 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.932 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.932 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.932 00:14:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.932 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.932 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.932 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.932 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.932 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.932 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.932 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.932 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.932 00:14:14 -- setup/common.sh@32 -- # continue 00:05:26.932 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.932 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.932 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.932 00:14:14 -- setup/common.sh@33 -- # echo 0 00:05:26.932 00:14:14 -- setup/common.sh@33 -- # return 0 00:05:26.932 00:14:14 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:05:26.932 00:14:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:26.932 00:14:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:26.932 00:14:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:26.932 node0=1024 expecting 1024 00:05:26.932 00:14:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:26.932 00:14:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:26.932 00:14:14 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:26.932 00:14:14 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:26.932 00:14:14 -- setup/hugepages.sh@202 -- # setup output 00:05:26.932 00:14:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.932 00:14:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.502 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.502 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:27.502 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:27.502 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:27.502 00:14:14 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:27.502 00:14:14 -- setup/hugepages.sh@89 -- # local node 00:05:27.502 00:14:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:27.502 00:14:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:27.502 00:14:14 -- setup/hugepages.sh@92 -- # local surp 00:05:27.502 00:14:14 -- setup/hugepages.sh@93 -- # local resv 00:05:27.502 00:14:14 -- setup/hugepages.sh@94 -- # local anon 00:05:27.502 00:14:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:27.502 00:14:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:27.502 00:14:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:27.502 00:14:14 -- setup/common.sh@18 -- # local node= 00:05:27.502 00:14:14 -- setup/common.sh@19 -- # local var val 00:05:27.502 00:14:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:27.502 00:14:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.502 00:14:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.502 00:14:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.502 00:14:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.502 00:14:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6612016 kB' 'MemAvailable: 9513080 kB' 'Buffers: 2436 kB' 'Cached: 3102720 kB' 'SwapCached: 0 kB' 'Active: 486992 kB' 'Inactive: 2732464 kB' 'Active(anon): 124768 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115920 kB' 'Mapped: 48128 kB' 'Shmem: 10468 kB' 'KReclaimable: 87080 kB' 'Slab: 166356 kB' 'SReclaimable: 87080 kB' 'SUnreclaim: 79276 kB' 'KernelStack: 6408 kB' 'PageTables: 3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 
0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- 
setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.502 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.502 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.503 00:14:14 -- setup/common.sh@33 -- # echo 0 00:05:27.503 00:14:14 -- setup/common.sh@33 -- # return 0 00:05:27.503 00:14:14 -- setup/hugepages.sh@97 -- # anon=0 00:05:27.503 00:14:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:27.503 00:14:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.503 00:14:14 -- setup/common.sh@18 -- # local node= 00:05:27.503 00:14:14 -- setup/common.sh@19 -- # local var val 00:05:27.503 00:14:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:27.503 00:14:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.503 00:14:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.503 00:14:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.503 00:14:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.503 00:14:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6612692 kB' 'MemAvailable: 9513756 kB' 'Buffers: 2436 kB' 'Cached: 3102720 kB' 'SwapCached: 0 kB' 'Active: 486212 kB' 'Inactive: 2732464 kB' 'Active(anon): 123988 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115092 kB' 'Mapped: 48016 kB' 'Shmem: 10468 kB' 'KReclaimable: 87080 kB' 'Slab: 166300 kB' 'SReclaimable: 87080 kB' 'SUnreclaim: 79220 kB' 'KernelStack: 6416 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.503 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.503 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 
00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
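The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" in this trace come from the get_meminfo helper in setup/common.sh: it walks /proc/meminfo (or a per-node meminfo file) one "key: value" pair at a time with IFS=': ' and only stops to echo a value when the requested key matches. A minimal illustrative sketch of that pattern follows; it is not the SPDK source, and the get_meminfo_sketch name is hypothetical:

get_meminfo_sketch() {
    # Print the value of one meminfo field, e.g. HugePages_Surp -> 0.
    local get=$1
    local mem_f=${2:-/proc/meminfo}   # the real helper can also read
                                      # /sys/devices/system/node/node0/meminfo
                                      # and strip its leading "Node 0 " prefix
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

get_meminfo_sketch HugePages_Surp   # would print 0 on this runner, matching surp=0 below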
00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.504 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.504 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.505 00:14:14 -- 
setup/common.sh@33 -- # echo 0 00:05:27.505 00:14:14 -- setup/common.sh@33 -- # return 0 00:05:27.505 00:14:14 -- setup/hugepages.sh@99 -- # surp=0 00:05:27.505 00:14:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:27.505 00:14:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:27.505 00:14:14 -- setup/common.sh@18 -- # local node= 00:05:27.505 00:14:14 -- setup/common.sh@19 -- # local var val 00:05:27.505 00:14:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:27.505 00:14:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.505 00:14:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.505 00:14:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.505 00:14:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.505 00:14:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6612692 kB' 'MemAvailable: 9513756 kB' 'Buffers: 2436 kB' 'Cached: 3102720 kB' 'SwapCached: 0 kB' 'Active: 486588 kB' 'Inactive: 2732464 kB' 'Active(anon): 124364 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115472 kB' 'Mapped: 48016 kB' 'Shmem: 10468 kB' 'KReclaimable: 87080 kB' 'Slab: 166300 kB' 'SReclaimable: 87080 kB' 'SUnreclaim: 79220 kB' 'KernelStack: 6416 kB' 'PageTables: 3788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- 
setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.505 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.505 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 
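After this field-by-field scan, the trace repeats the same lookup for HugePages_Rsvd and HugePages_Total, and the hugepages.sh@107 and @109 checks traced below verify that the pool is consistent: the kernel's HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages, with the surplus lookup then repeated per NUMA node. A rough sketch of that accounting, reusing the hypothetical get_meminfo_sketch above rather than the actual setup/hugepages.sh code:

verify_nr_hugepages_sketch() {
    local nr_hugepages=$1                          # 1024 requested in this run
    local surp resv total
    surp=$(get_meminfo_sketch HugePages_Surp)      # 0 in this log
    resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0 in this log
    total=$(get_meminfo_sketch HugePages_Total)    # 1024 in this log
    # AnonHugePages is only reported (anon_hugepages=0); it is not part of the sum.
    (( total == nr_hugepages + surp + resv )) || return 1
    (( total == nr_hugepages ))
}

verify_nr_hugepages_sketch 1024 && echo "nr_hugepages=1024 verified"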
00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.506 00:14:14 -- setup/common.sh@33 -- # echo 0 00:05:27.506 00:14:14 -- setup/common.sh@33 -- # return 0 00:05:27.506 00:14:14 -- setup/hugepages.sh@100 -- # resv=0 00:05:27.506 nr_hugepages=1024 00:05:27.506 00:14:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:27.506 resv_hugepages=0 00:05:27.506 00:14:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:27.506 surplus_hugepages=0 00:05:27.506 00:14:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:27.506 anon_hugepages=0 00:05:27.506 00:14:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:27.506 00:14:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:27.506 00:14:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:27.506 00:14:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:27.506 00:14:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:27.506 00:14:14 -- setup/common.sh@18 -- # local node= 00:05:27.506 00:14:14 -- setup/common.sh@19 -- # local var val 00:05:27.506 00:14:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:27.506 00:14:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.506 00:14:14 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:27.506 00:14:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.506 00:14:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.506 00:14:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6612692 kB' 'MemAvailable: 9513756 kB' 'Buffers: 2436 kB' 'Cached: 3102720 kB' 'SwapCached: 0 kB' 'Active: 486588 kB' 'Inactive: 2732464 kB' 'Active(anon): 124364 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115128 kB' 'Mapped: 48016 kB' 'Shmem: 10468 kB' 'KReclaimable: 87080 kB' 'Slab: 166296 kB' 'SReclaimable: 87080 kB' 'SUnreclaim: 79216 kB' 'KernelStack: 6416 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 334116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.506 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.506 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.507 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.507 
00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.507 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.508 00:14:14 -- setup/common.sh@33 -- # echo 1024 00:05:27.508 00:14:14 -- setup/common.sh@33 -- # return 0 00:05:27.508 00:14:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:27.508 00:14:14 -- setup/hugepages.sh@112 -- # get_nodes 00:05:27.508 00:14:14 -- setup/hugepages.sh@27 -- # local node 00:05:27.508 00:14:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:27.508 00:14:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:27.508 00:14:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:27.508 00:14:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:27.508 00:14:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:27.508 00:14:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:27.508 00:14:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:27.508 00:14:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.508 00:14:14 -- setup/common.sh@18 -- # local node=0 00:05:27.508 00:14:14 -- setup/common.sh@19 -- # local var val 00:05:27.508 00:14:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:27.508 00:14:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.508 00:14:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:27.508 00:14:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:27.508 00:14:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.508 00:14:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6612944 kB' 'MemUsed: 5629028 kB' 'SwapCached: 0 kB' 'Active: 486200 kB' 'Inactive: 2732464 kB' 'Active(anon): 123976 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3105156 kB' 'Mapped: 48016 kB' 'AnonPages: 115144 kB' 'Shmem: 10468 kB' 'KernelStack: 6432 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87080 kB' 'Slab: 166288 kB' 'SReclaimable: 87080 kB' 'SUnreclaim: 79208 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 
00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.508 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.508 00:14:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # continue 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:27.509 00:14:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:27.509 00:14:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.509 00:14:14 -- setup/common.sh@33 -- # echo 0 00:05:27.509 00:14:14 -- setup/common.sh@33 -- # return 0 00:05:27.509 00:14:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:27.509 00:14:14 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:27.509 00:14:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:27.509 00:14:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:27.509 node0=1024 expecting 1024 00:05:27.509 00:14:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:27.509 00:14:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:27.509 00:05:27.509 real 0m1.114s 00:05:27.509 user 0m0.555s 00:05:27.509 sys 0m0.600s 00:05:27.509 00:14:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.509 00:14:14 -- common/autotest_common.sh@10 -- # set +x 00:05:27.509 ************************************ 00:05:27.509 END TEST no_shrink_alloc 00:05:27.509 ************************************ 00:05:27.509 00:14:14 -- setup/hugepages.sh@217 -- # clear_hp 00:05:27.509 00:14:14 -- setup/hugepages.sh@37 -- # local node hp 00:05:27.509 00:14:14 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:27.509 00:14:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:27.509 00:14:14 -- setup/hugepages.sh@41 -- # echo 0 00:05:27.509 00:14:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:27.509 00:14:14 -- setup/hugepages.sh@41 -- # echo 0 00:05:27.767 00:14:14 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:27.767 00:14:14 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:27.767 00:05:27.767 real 0m4.714s 00:05:27.767 user 0m2.250s 00:05:27.767 sys 0m2.528s 00:05:27.767 00:14:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.767 00:14:14 -- common/autotest_common.sh@10 -- # set +x 00:05:27.767 ************************************ 00:05:27.767 END TEST hugepages 00:05:27.767 ************************************ 00:05:27.767 00:14:14 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:27.767 00:14:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.767 00:14:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.767 00:14:14 -- common/autotest_common.sh@10 -- # set +x 00:05:27.767 ************************************ 00:05:27.767 START TEST driver 00:05:27.767 ************************************ 00:05:27.767 00:14:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:27.767 * Looking for test storage... 
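The long field-by-field scan earlier in this log is setup/common.sh's get_meminfo helper walking /proc/meminfo (or a node's meminfo file under /sys) until it reaches the requested key, then echoing just the value. A minimal standalone sketch of that lookup, assuming a hypothetical helper name get_meminfo_value and simplified prefix handling rather than the exact SPDK implementation:

get_meminfo_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node statistics live under sysfs when a node index is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        # Per-node files prefix every row with "Node <n> "; strip that first.
        if [[ $line == Node\ * ]]; then
            line=${line#Node }
            line=${line#* }
        fi
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "$val"   # value only, e.g. a kB figure or a hugepage count
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Mirrors the check traced above: node 0 should report exactly 1024 hugepages.
# (( $(get_meminfo_value HugePages_Total 0) == 1024 )) && echo 'node0=1024 as expected'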
00:05:27.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:27.767 00:14:14 -- setup/driver.sh@68 -- # setup reset 00:05:27.767 00:14:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:27.767 00:14:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:28.333 00:14:15 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:28.333 00:14:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:28.333 00:14:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.333 00:14:15 -- common/autotest_common.sh@10 -- # set +x 00:05:28.333 ************************************ 00:05:28.333 START TEST guess_driver 00:05:28.333 ************************************ 00:05:28.333 00:14:15 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:28.333 00:14:15 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:28.333 00:14:15 -- setup/driver.sh@47 -- # local fail=0 00:05:28.333 00:14:15 -- setup/driver.sh@49 -- # pick_driver 00:05:28.333 00:14:15 -- setup/driver.sh@36 -- # vfio 00:05:28.333 00:14:15 -- setup/driver.sh@21 -- # local iommu_grups 00:05:28.333 00:14:15 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:28.333 00:14:15 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:28.333 00:14:15 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:28.333 00:14:15 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:28.333 00:14:15 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:28.333 00:14:15 -- setup/driver.sh@32 -- # return 1 00:05:28.333 00:14:15 -- setup/driver.sh@38 -- # uio 00:05:28.333 00:14:15 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:28.333 00:14:15 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:28.333 00:14:15 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:28.333 00:14:15 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:28.333 00:14:15 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:28.333 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:28.333 00:14:15 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:28.333 Looking for driver=uio_pci_generic 00:05:28.333 00:14:15 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:28.333 00:14:15 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:28.333 00:14:15 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:28.333 00:14:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.333 00:14:15 -- setup/driver.sh@45 -- # setup output config 00:05:28.333 00:14:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.333 00:14:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:28.900 00:14:16 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:28.900 00:14:16 -- setup/driver.sh@58 -- # continue 00:05:28.900 00:14:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.158 00:14:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.158 00:14:16 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:29.158 00:14:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.158 00:14:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:29.158 00:14:16 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:29.158 00:14:16 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:29.158 00:14:16 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:29.158 00:14:16 -- setup/driver.sh@65 -- # setup reset 00:05:29.158 00:14:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:29.158 00:14:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:29.725 00:05:29.725 real 0m1.353s 00:05:29.725 user 0m0.505s 00:05:29.725 sys 0m0.862s 00:05:29.725 ************************************ 00:05:29.725 END TEST guess_driver 00:05:29.725 ************************************ 00:05:29.725 00:14:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.725 00:14:16 -- common/autotest_common.sh@10 -- # set +x 00:05:29.725 ************************************ 00:05:29.725 END TEST driver 00:05:29.725 ************************************ 00:05:29.725 00:05:29.725 real 0m2.043s 00:05:29.725 user 0m0.733s 00:05:29.725 sys 0m1.369s 00:05:29.725 00:14:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.725 00:14:16 -- common/autotest_common.sh@10 -- # set +x 00:05:29.725 00:14:16 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:29.725 00:14:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.725 00:14:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.725 00:14:16 -- common/autotest_common.sh@10 -- # set +x 00:05:29.725 ************************************ 00:05:29.725 START TEST devices 00:05:29.725 ************************************ 00:05:29.725 00:14:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:29.725 * Looking for test storage... 00:05:29.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:29.983 00:14:16 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:29.983 00:14:16 -- setup/devices.sh@192 -- # setup reset 00:05:29.983 00:14:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:29.983 00:14:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:30.550 00:14:17 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:30.550 00:14:17 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:30.550 00:14:17 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:30.550 00:14:17 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:30.550 00:14:17 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:30.550 00:14:17 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:30.550 00:14:17 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:30.550 00:14:17 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:30.550 00:14:17 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:30.550 00:14:17 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:30.550 00:14:17 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:30.550 00:14:17 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:30.550 00:14:17 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:30.550 00:14:17 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:30.550 00:14:17 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:30.550 00:14:17 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:30.550 00:14:17 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:30.550 00:14:17 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:30.550 00:14:17 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:30.550 00:14:17 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:30.550 00:14:17 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:30.550 00:14:17 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:30.550 00:14:17 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:30.550 00:14:17 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:30.550 00:14:17 -- setup/devices.sh@196 -- # blocks=() 00:05:30.550 00:14:17 -- setup/devices.sh@196 -- # declare -a blocks 00:05:30.550 00:14:17 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:30.550 00:14:17 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:30.550 00:14:17 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:30.550 00:14:17 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:30.550 00:14:17 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:30.550 00:14:17 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:30.550 00:14:17 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:30.550 00:14:17 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:30.550 00:14:17 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:30.551 00:14:17 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:30.551 00:14:17 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:30.551 No valid GPT data, bailing 00:05:30.551 00:14:17 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:30.551 00:14:17 -- scripts/common.sh@393 -- # pt= 00:05:30.551 00:14:17 -- scripts/common.sh@394 -- # return 1 00:05:30.551 00:14:17 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:30.551 00:14:17 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:30.551 00:14:17 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:30.551 00:14:17 -- setup/common.sh@80 -- # echo 5368709120 00:05:30.551 00:14:17 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:30.551 00:14:17 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:30.551 00:14:17 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:30.551 00:14:17 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:30.551 00:14:17 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:30.551 00:14:17 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:30.551 00:14:17 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:30.551 00:14:17 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:30.551 00:14:17 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:30.551 00:14:17 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:30.551 00:14:17 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:30.810 No valid GPT data, bailing 00:05:30.810 00:14:17 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:30.810 00:14:17 -- scripts/common.sh@393 -- # pt= 00:05:30.810 00:14:17 -- scripts/common.sh@394 -- # return 1 00:05:30.810 00:14:17 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:30.810 00:14:17 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:30.810 00:14:17 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:30.810 00:14:17 -- setup/common.sh@80 -- # echo 4294967296 00:05:30.810 00:14:17 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:30.810 00:14:17 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:30.810 00:14:17 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:30.810 00:14:17 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:30.810 00:14:17 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:30.810 00:14:17 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:30.810 00:14:17 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:30.810 00:14:17 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:30.810 00:14:17 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:30.810 00:14:17 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:30.810 00:14:17 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:30.810 No valid GPT data, bailing 00:05:30.810 00:14:17 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:30.810 00:14:17 -- scripts/common.sh@393 -- # pt= 00:05:30.810 00:14:17 -- scripts/common.sh@394 -- # return 1 00:05:30.810 00:14:17 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:30.810 00:14:17 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:30.810 00:14:17 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:30.810 00:14:17 -- setup/common.sh@80 -- # echo 4294967296 00:05:30.810 00:14:17 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:30.810 00:14:17 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:30.810 00:14:17 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:30.810 00:14:17 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:30.810 00:14:17 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:30.810 00:14:17 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:30.810 00:14:17 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:30.810 00:14:17 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:30.810 00:14:17 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:30.810 00:14:17 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:30.810 00:14:17 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:30.810 No valid GPT data, bailing 00:05:30.810 00:14:17 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:30.810 00:14:18 -- scripts/common.sh@393 -- # pt= 00:05:30.810 00:14:18 -- scripts/common.sh@394 -- # return 1 00:05:30.810 00:14:18 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:30.810 00:14:18 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:30.810 00:14:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:30.810 00:14:18 -- setup/common.sh@80 -- # echo 4294967296 00:05:30.810 00:14:18 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:30.810 00:14:18 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:30.810 00:14:18 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:30.810 00:14:18 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:30.810 00:14:18 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:30.810 00:14:18 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:30.810 00:14:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.810 00:14:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.810 00:14:18 -- common/autotest_common.sh@10 -- # set +x 00:05:30.810 
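Before the nvme_mount banner below, the devices suite has enumerated every /sys/block/nvme* entry, skipped zoned namespaces, confirmed via the GPT probe that each disk is unpartitioned ("No valid GPT data, bailing"), and kept only disks of at least min_disk_size=3221225472 bytes (3 GiB). A simplified sketch of that filter, assuming a hypothetical pick_test_disks helper and plain blkid in place of the spdk-gpt.py check:

pick_test_disks() {
    local min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace
    local block dev size
    for block in /sys/block/nvme*; do
        [[ -e $block ]] || continue
        dev=${block##*/}
        # Skip zoned namespaces; the tests want ordinary block devices only.
        [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
        # Skip disks that already carry a partition table.
        [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue
        # sysfs reports the size in 512-byte sectors.
        size=$(( $(<"$block/size") * 512 ))
        (( size >= min_disk_size )) && echo "$dev"
    done
}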
************************************ 00:05:30.810 START TEST nvme_mount 00:05:30.810 ************************************ 00:05:30.810 00:14:18 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:30.810 00:14:18 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:30.810 00:14:18 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:30.810 00:14:18 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:30.810 00:14:18 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:30.810 00:14:18 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:30.810 00:14:18 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:30.810 00:14:18 -- setup/common.sh@40 -- # local part_no=1 00:05:30.810 00:14:18 -- setup/common.sh@41 -- # local size=1073741824 00:05:30.810 00:14:18 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:30.810 00:14:18 -- setup/common.sh@44 -- # parts=() 00:05:30.810 00:14:18 -- setup/common.sh@44 -- # local parts 00:05:30.810 00:14:18 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:30.810 00:14:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:30.810 00:14:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:30.810 00:14:18 -- setup/common.sh@46 -- # (( part++ )) 00:05:30.810 00:14:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:30.810 00:14:18 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:30.810 00:14:18 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:30.810 00:14:18 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:32.186 Creating new GPT entries in memory. 00:05:32.186 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:32.186 other utilities. 00:05:32.186 00:14:19 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:32.186 00:14:19 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:32.186 00:14:19 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:32.186 00:14:19 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:32.186 00:14:19 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:33.120 Creating new GPT entries in memory. 00:05:33.120 The operation has completed successfully. 
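The sgdisk messages just above come from the partition_drive step: the target disk is zapped, a single test partition spanning sectors 2048:264191 is created, and the run then waits for the new partition node, formats it, and mounts it (the steps traced next). A condensed sketch of that sequence, with the simple polling loop standing in for the real sync_dev_uevents.sh helper:

disk=/dev/nvme0n1
part=${disk}p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                 # wipe any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:264191       # partition 1, same sector range as the trace
for _ in {1..50}; do                     # wait for the kernel to publish the node
    [[ -b $part ]] && break
    sleep 0.1
done
mkfs.ext4 -qF "$part"                    # quiet, force, as in the setup/common.sh mkfs step
mkdir -p "$mnt"
mount "$part" "$mnt"
touch "$mnt/test_nvme"                   # dummy file the verify step later checks for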
00:05:33.120 00:14:20 -- setup/common.sh@57 -- # (( part++ )) 00:05:33.120 00:14:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:33.120 00:14:20 -- setup/common.sh@62 -- # wait 65839 00:05:33.120 00:14:20 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.120 00:14:20 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:33.120 00:14:20 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.120 00:14:20 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:33.120 00:14:20 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:33.120 00:14:20 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.120 00:14:20 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:33.120 00:14:20 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:33.120 00:14:20 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:33.120 00:14:20 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.120 00:14:20 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:33.120 00:14:20 -- setup/devices.sh@53 -- # local found=0 00:05:33.120 00:14:20 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:33.120 00:14:20 -- setup/devices.sh@56 -- # : 00:05:33.120 00:14:20 -- setup/devices.sh@59 -- # local pci status 00:05:33.120 00:14:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.120 00:14:20 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:33.120 00:14:20 -- setup/devices.sh@47 -- # setup output config 00:05:33.120 00:14:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.120 00:14:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:33.120 00:14:20 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.120 00:14:20 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:33.120 00:14:20 -- setup/devices.sh@63 -- # found=1 00:05:33.120 00:14:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.120 00:14:20 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.120 00:14:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.686 00:14:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.686 00:14:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.686 00:14:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.686 00:14:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.686 00:14:20 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.686 00:14:20 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:33.686 00:14:20 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.686 00:14:20 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:33.686 00:14:20 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:33.686 00:14:20 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:33.686 00:14:20 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.686 00:14:20 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.686 00:14:20 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.686 00:14:20 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:33.686 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:33.687 00:14:20 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.687 00:14:20 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:33.945 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:33.945 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:33.945 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:33.945 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:33.945 00:14:21 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:33.945 00:14:21 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:33.945 00:14:21 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.945 00:14:21 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:33.945 00:14:21 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:33.945 00:14:21 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.945 00:14:21 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:33.945 00:14:21 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:33.945 00:14:21 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:33.945 00:14:21 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.945 00:14:21 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:33.945 00:14:21 -- setup/devices.sh@53 -- # local found=0 00:05:33.945 00:14:21 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:33.945 00:14:21 -- setup/devices.sh@56 -- # : 00:05:33.945 00:14:21 -- setup/devices.sh@59 -- # local pci status 00:05:33.945 00:14:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.945 00:14:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:33.945 00:14:21 -- setup/devices.sh@47 -- # setup output config 00:05:33.945 00:14:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.945 00:14:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:34.204 00:14:21 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.204 00:14:21 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:34.204 00:14:21 -- setup/devices.sh@63 -- # found=1 00:05:34.204 00:14:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.204 00:14:21 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.204 
00:14:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.462 00:14:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.462 00:14:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.720 00:14:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.720 00:14:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.720 00:14:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:34.720 00:14:21 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:34.720 00:14:21 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.720 00:14:21 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:34.720 00:14:21 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:34.720 00:14:21 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.720 00:14:21 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:34.720 00:14:21 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:34.720 00:14:21 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:34.720 00:14:21 -- setup/devices.sh@50 -- # local mount_point= 00:05:34.720 00:14:21 -- setup/devices.sh@51 -- # local test_file= 00:05:34.720 00:14:21 -- setup/devices.sh@53 -- # local found=0 00:05:34.720 00:14:21 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:34.720 00:14:21 -- setup/devices.sh@59 -- # local pci status 00:05:34.720 00:14:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.720 00:14:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:34.720 00:14:21 -- setup/devices.sh@47 -- # setup output config 00:05:34.720 00:14:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.721 00:14:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:34.979 00:14:22 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.979 00:14:22 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:34.979 00:14:22 -- setup/devices.sh@63 -- # found=1 00:05:34.979 00:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.979 00:14:22 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.979 00:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.238 00:14:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:35.238 00:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.238 00:14:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:35.238 00:14:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.496 00:14:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:35.496 00:14:22 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:35.496 00:14:22 -- setup/devices.sh@68 -- # return 0 00:05:35.496 00:14:22 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:35.496 00:14:22 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.497 00:14:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.497 00:14:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.497 00:14:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:35.497 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:35.497 00:05:35.497 real 0m4.494s 00:05:35.497 user 0m0.989s 00:05:35.497 sys 0m1.213s 00:05:35.497 00:14:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.497 ************************************ 00:05:35.497 00:14:22 -- common/autotest_common.sh@10 -- # set +x 00:05:35.497 END TEST nvme_mount 00:05:35.497 ************************************ 00:05:35.497 00:14:22 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:35.497 00:14:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:35.497 00:14:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:35.497 00:14:22 -- common/autotest_common.sh@10 -- # set +x 00:05:35.497 ************************************ 00:05:35.497 START TEST dm_mount 00:05:35.497 ************************************ 00:05:35.497 00:14:22 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:35.497 00:14:22 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:35.497 00:14:22 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:35.497 00:14:22 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:35.497 00:14:22 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:35.497 00:14:22 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:35.497 00:14:22 -- setup/common.sh@40 -- # local part_no=2 00:05:35.497 00:14:22 -- setup/common.sh@41 -- # local size=1073741824 00:05:35.497 00:14:22 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:35.497 00:14:22 -- setup/common.sh@44 -- # parts=() 00:05:35.497 00:14:22 -- setup/common.sh@44 -- # local parts 00:05:35.497 00:14:22 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:35.497 00:14:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:35.497 00:14:22 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:35.497 00:14:22 -- setup/common.sh@46 -- # (( part++ )) 00:05:35.497 00:14:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:35.497 00:14:22 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:35.497 00:14:22 -- setup/common.sh@46 -- # (( part++ )) 00:05:35.497 00:14:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:35.497 00:14:22 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:35.497 00:14:22 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:35.497 00:14:22 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:36.430 Creating new GPT entries in memory. 00:05:36.430 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:36.430 other utilities. 00:05:36.430 00:14:23 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:36.430 00:14:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:36.430 00:14:23 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:36.430 00:14:23 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:36.430 00:14:23 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:37.806 Creating new GPT entries in memory. 00:05:37.806 The operation has completed successfully. 00:05:37.806 00:14:24 -- setup/common.sh@57 -- # (( part++ )) 00:05:37.806 00:14:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:37.806 00:14:24 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:37.806 00:14:24 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:37.806 00:14:24 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:38.740 The operation has completed successfully. 00:05:38.740 00:14:25 -- setup/common.sh@57 -- # (( part++ )) 00:05:38.740 00:14:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:38.740 00:14:25 -- setup/common.sh@62 -- # wait 66298 00:05:38.740 00:14:25 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:38.740 00:14:25 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:38.740 00:14:25 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:38.740 00:14:25 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:38.740 00:14:25 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:38.740 00:14:25 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.740 00:14:25 -- setup/devices.sh@161 -- # break 00:05:38.740 00:14:25 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.740 00:14:25 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:38.740 00:14:25 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:38.740 00:14:25 -- setup/devices.sh@166 -- # dm=dm-0 00:05:38.740 00:14:25 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:38.740 00:14:25 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:38.741 00:14:25 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:38.741 00:14:25 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:38.741 00:14:25 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:38.741 00:14:25 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.741 00:14:25 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:38.741 00:14:25 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:38.741 00:14:25 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:38.741 00:14:25 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:38.741 00:14:25 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:38.741 00:14:25 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:38.741 00:14:25 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:38.741 00:14:25 -- setup/devices.sh@53 -- # local found=0 00:05:38.741 00:14:25 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:38.741 00:14:25 -- setup/devices.sh@56 -- # : 00:05:38.741 00:14:25 -- setup/devices.sh@59 -- # local pci status 00:05:38.741 00:14:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.741 00:14:25 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:38.741 00:14:25 -- setup/devices.sh@47 -- # setup output config 00:05:38.741 00:14:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.741 00:14:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:38.741 00:14:25 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:38.741 00:14:25 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:38.741 00:14:25 -- setup/devices.sh@63 -- # found=1 00:05:38.741 00:14:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.741 00:14:25 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:38.741 00:14:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.306 00:14:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:39.306 00:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.306 00:14:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:39.306 00:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.306 00:14:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:39.306 00:14:26 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:39.306 00:14:26 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:39.306 00:14:26 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:39.306 00:14:26 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:39.306 00:14:26 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:39.306 00:14:26 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:39.306 00:14:26 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:39.306 00:14:26 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:39.306 00:14:26 -- setup/devices.sh@50 -- # local mount_point= 00:05:39.306 00:14:26 -- setup/devices.sh@51 -- # local test_file= 00:05:39.306 00:14:26 -- setup/devices.sh@53 -- # local found=0 00:05:39.306 00:14:26 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:39.306 00:14:26 -- setup/devices.sh@59 -- # local pci status 00:05:39.306 00:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.306 00:14:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:39.306 00:14:26 -- setup/devices.sh@47 -- # setup output config 00:05:39.306 00:14:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:39.306 00:14:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:39.563 00:14:26 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:39.563 00:14:26 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:39.563 00:14:26 -- setup/devices.sh@63 -- # found=1 00:05:39.563 00:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.563 00:14:26 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:39.563 00:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.821 00:14:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:39.821 00:14:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:39.821 00:14:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:39.821 00:14:27 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.079 00:14:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:40.079 00:14:27 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:40.079 00:14:27 -- setup/devices.sh@68 -- # return 0 00:05:40.079 00:14:27 -- setup/devices.sh@187 -- # cleanup_dm 00:05:40.079 00:14:27 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:40.079 00:14:27 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:40.079 00:14:27 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:40.079 00:14:27 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:40.079 00:14:27 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:40.079 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:40.079 00:14:27 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:40.079 00:14:27 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:40.079 00:05:40.079 real 0m4.560s 00:05:40.079 user 0m0.678s 00:05:40.079 sys 0m0.817s 00:05:40.079 00:14:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.079 00:14:27 -- common/autotest_common.sh@10 -- # set +x 00:05:40.079 ************************************ 00:05:40.079 END TEST dm_mount 00:05:40.079 ************************************ 00:05:40.079 00:14:27 -- setup/devices.sh@1 -- # cleanup 00:05:40.079 00:14:27 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:40.079 00:14:27 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:40.079 00:14:27 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:40.079 00:14:27 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:40.079 00:14:27 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:40.079 00:14:27 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:40.336 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:40.336 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:40.336 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:40.336 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:40.336 00:14:27 -- setup/devices.sh@12 -- # cleanup_dm 00:05:40.336 00:14:27 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:40.337 00:14:27 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:40.337 00:14:27 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:40.337 00:14:27 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:40.337 00:14:27 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:40.337 00:14:27 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:40.337 00:05:40.337 real 0m10.598s 00:05:40.337 user 0m2.311s 00:05:40.337 sys 0m2.634s 00:05:40.337 00:14:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.337 ************************************ 00:05:40.337 END TEST devices 00:05:40.337 00:14:27 -- common/autotest_common.sh@10 -- # set +x 00:05:40.337 ************************************ 00:05:40.337 00:05:40.337 real 0m21.889s 00:05:40.337 user 0m7.183s 00:05:40.337 sys 0m9.126s 00:05:40.337 00:14:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.337 ************************************ 00:05:40.337 END TEST setup.sh 00:05:40.337 ************************************ 00:05:40.337 00:14:27 -- common/autotest_common.sh@10 -- # set +x 00:05:40.337 00:14:27 -- 
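Teardown mirrors setup in reverse: unmount, remove the dm node, then wipe filesystem signatures from the partitions and finally the GPT/PMBR headers from the whole disk (the "45 46 49 20 50 41 52 54" bytes erased above are the ASCII "EFI PART" GPT signature, and "53 ef" is the ext4 superblock magic). A minimal sketch using the same names as the test:

  umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 2>/dev/null || true
  [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
  wipefs --all /dev/nvme0n1p1          # ext4 superblock magic (53 ef)
  wipefs --all /dev/nvme0n1p2
  wipefs --all /dev/nvme0n1            # primary + backup GPT plus the protective MBR (55 aa)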
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:40.594 Hugepages 00:05:40.594 node hugesize free / total 00:05:40.594 node0 1048576kB 0 / 0 00:05:40.594 node0 2048kB 2048 / 2048 00:05:40.594 00:05:40.594 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:40.594 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:40.861 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:40.861 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:40.861 00:14:27 -- spdk/autotest.sh@141 -- # uname -s 00:05:40.861 00:14:27 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:40.861 00:14:27 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:40.861 00:14:27 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:41.442 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.442 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:41.700 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:41.700 00:14:28 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:42.633 00:14:29 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:42.633 00:14:29 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:42.633 00:14:29 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:42.633 00:14:29 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:42.633 00:14:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:42.633 00:14:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:42.633 00:14:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:42.633 00:14:29 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:42.633 00:14:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:42.633 00:14:29 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:42.633 00:14:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:42.633 00:14:29 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:43.200 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.200 Waiting for block devices as requested 00:05:43.200 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:43.200 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:43.200 00:14:30 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:43.200 00:14:30 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:43.200 00:14:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:43.200 00:14:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:43.460 00:14:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:43.460 00:14:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:43.460 00:14:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:43.460 00:14:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:43.460 00:14:30 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:43.460 00:14:30 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:43.460 00:14:30 -- 
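get_nvme_ctrlr_from_bdf, traced at the end of the block above, maps a PCI address back to its /dev/nvmeX node purely through sysfs: every /sys/class/nvme/nvmeN entry resolves to a path that contains the owning BDF. A hedged equivalent of that lookup:

  bdf=0000:00:06.0
  # .../pci0000:00/0000:00:06.0/nvme/nvme0 in this run
  path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
  ctrlr=/dev/$(basename "$path")       # -> /dev/nvme0
  echo "$ctrlr"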
common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:43.460 00:14:30 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:43.460 00:14:30 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:43.460 00:14:30 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:43.460 00:14:30 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:43.460 00:14:30 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:43.460 00:14:30 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:43.460 00:14:30 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:43.460 00:14:30 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:43.460 00:14:30 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:43.460 00:14:30 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:43.460 00:14:30 -- common/autotest_common.sh@1542 -- # continue 00:05:43.460 00:14:30 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:43.460 00:14:30 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:43.460 00:14:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:43.460 00:14:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:05:43.460 00:14:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:43.460 00:14:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:43.460 00:14:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:43.460 00:14:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:43.460 00:14:30 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:05:43.460 00:14:30 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:05:43.460 00:14:30 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:05:43.460 00:14:30 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:43.460 00:14:30 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:43.460 00:14:30 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:43.460 00:14:30 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:43.460 00:14:30 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:43.460 00:14:30 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:05:43.460 00:14:30 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:43.460 00:14:30 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:43.460 00:14:30 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:43.460 00:14:30 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:43.460 00:14:30 -- common/autotest_common.sh@1542 -- # continue 00:05:43.460 00:14:30 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:43.460 00:14:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:43.460 00:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:43.460 00:14:30 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:43.460 00:14:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:43.460 00:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:43.460 00:14:30 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:44.029 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.289 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.289 0000:00:07.0 (1b36 0010): nvme -> 
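The oacs/unvmcap probing above decides whether a namespace revert is even possible. nvme id-ctrl reports OACS as a bitmask; bit 3 (0x8) is Namespace Management/Attachment support, so 0x12a masked with 0x8 yields 8 and the capability is present, while an unvmcap of 0 means there is no unallocated capacity to reclaim, hence the "continue". The same parsing as a small sketch:

  ctrlr=/dev/nvme0
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # ' 0x12a' in this run
  if (( oacs & 0x8 )); then
    echo "namespace management supported"
  fi
  unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)  # unallocated NVM capacity
  (( unvmcap == 0 )) && echo "nothing to revert"                 # matches the [[ 0 -eq 0 ]] branch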
uio_pci_generic 00:05:44.289 00:14:31 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:44.289 00:14:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:44.289 00:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:44.289 00:14:31 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:44.289 00:14:31 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:44.289 00:14:31 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:44.289 00:14:31 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:44.289 00:14:31 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:44.289 00:14:31 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:44.289 00:14:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:44.289 00:14:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:44.289 00:14:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:44.289 00:14:31 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:44.289 00:14:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:44.289 00:14:31 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:44.289 00:14:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:44.289 00:14:31 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:44.289 00:14:31 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:44.289 00:14:31 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:44.289 00:14:31 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:44.289 00:14:31 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:44.289 00:14:31 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:44.289 00:14:31 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:44.289 00:14:31 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:44.289 00:14:31 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:44.289 00:14:31 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:44.289 00:14:31 -- common/autotest_common.sh@1578 -- # return 0 00:05:44.289 00:14:31 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:44.289 00:14:31 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:44.289 00:14:31 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:44.289 00:14:31 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:44.289 00:14:31 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:44.289 00:14:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:44.289 00:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:44.289 00:14:31 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:44.289 00:14:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.289 00:14:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.289 00:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:44.289 ************************************ 00:05:44.289 START TEST env 00:05:44.289 ************************************ 00:05:44.289 00:14:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:44.548 * Looking for test storage... 
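opal_revert_cleanup only acts on controllers whose PCI device ID is 0x0a54 (an Intel data-center NVMe part); the emulated QEMU controllers here report 0x0010, so the bdf list stays empty and the function returns 0 without touching anything. Its sysfs check is essentially:

  for bdf in 0000:00:06.0 0000:00:07.0; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # PCI device ID, 0x0010 for these
    [[ $device == 0x0a54 ]] && echo "$bdf"             # only matching devices get OPAL-reverted
  done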
00:05:44.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:44.548 00:14:31 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:44.548 00:14:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.548 00:14:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.548 00:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:44.548 ************************************ 00:05:44.548 START TEST env_memory 00:05:44.548 ************************************ 00:05:44.548 00:14:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:44.548 00:05:44.548 00:05:44.548 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.548 http://cunit.sourceforge.net/ 00:05:44.548 00:05:44.548 00:05:44.548 Suite: memory 00:05:44.548 Test: alloc and free memory map ...[2024-07-13 00:14:31.655000] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:44.548 passed 00:05:44.548 Test: mem map translation ...[2024-07-13 00:14:31.686383] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:44.548 [2024-07-13 00:14:31.686422] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:44.548 [2024-07-13 00:14:31.686490] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:44.548 [2024-07-13 00:14:31.686503] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:44.548 passed 00:05:44.548 Test: mem map registration ...[2024-07-13 00:14:31.750515] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:44.548 [2024-07-13 00:14:31.750551] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:44.548 passed 00:05:44.807 Test: mem map adjacent registrations ...passed 00:05:44.807 00:05:44.807 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.807 suites 1 1 n/a 0 0 00:05:44.807 tests 4 4 4 0 0 00:05:44.807 asserts 152 152 152 0 n/a 00:05:44.807 00:05:44.807 Elapsed time = 0.214 seconds 00:05:44.807 00:05:44.807 real 0m0.235s 00:05:44.807 user 0m0.216s 00:05:44.807 sys 0m0.014s 00:05:44.807 00:14:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.807 00:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:44.807 ************************************ 00:05:44.807 END TEST env_memory 00:05:44.807 ************************************ 00:05:44.807 00:14:31 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:44.807 00:14:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.807 00:14:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.807 00:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:44.807 ************************************ 00:05:44.807 START TEST env_vtophys 00:05:44.807 ************************************ 00:05:44.807 00:14:31 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:44.807 EAL: lib.eal log level changed from notice to debug 00:05:44.807 EAL: Detected lcore 0 as core 0 on socket 0 00:05:44.807 EAL: Detected lcore 1 as core 0 on socket 0 00:05:44.807 EAL: Detected lcore 2 as core 0 on socket 0 00:05:44.807 EAL: Detected lcore 3 as core 0 on socket 0 00:05:44.807 EAL: Detected lcore 4 as core 0 on socket 0 00:05:44.807 EAL: Detected lcore 5 as core 0 on socket 0 00:05:44.807 EAL: Detected lcore 6 as core 0 on socket 0 00:05:44.807 EAL: Detected lcore 7 as core 0 on socket 0 00:05:44.807 EAL: Detected lcore 8 as core 0 on socket 0 00:05:44.807 EAL: Detected lcore 9 as core 0 on socket 0 00:05:44.807 EAL: Maximum logical cores by configuration: 128 00:05:44.807 EAL: Detected CPU lcores: 10 00:05:44.807 EAL: Detected NUMA nodes: 1 00:05:44.807 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:44.807 EAL: Detected shared linkage of DPDK 00:05:44.807 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:44.807 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:44.807 EAL: Registered [vdev] bus. 00:05:44.807 EAL: bus.vdev log level changed from disabled to notice 00:05:44.807 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:44.807 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:44.807 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:44.807 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:44.807 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:44.807 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:44.807 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:44.807 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:44.807 EAL: No shared files mode enabled, IPC will be disabled 00:05:44.807 EAL: No shared files mode enabled, IPC is disabled 00:05:44.807 EAL: Selected IOVA mode 'PA' 00:05:44.807 EAL: Probing VFIO support... 00:05:44.807 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:44.807 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:44.807 EAL: Ask a virtual area of 0x2e000 bytes 00:05:44.807 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:44.807 EAL: Setting up physically contiguous memory... 
00:05:44.807 EAL: Setting maximum number of open files to 524288 00:05:44.807 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:44.807 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:44.807 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.807 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:44.807 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.807 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.807 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:44.807 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:44.807 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.807 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:44.807 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.807 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.807 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:44.807 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:44.807 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.807 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:44.807 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.807 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.807 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:44.807 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:44.807 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.807 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:44.807 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.807 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.807 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:44.807 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:44.807 EAL: Hugepages will be freed exactly as allocated. 00:05:44.807 EAL: No shared files mode enabled, IPC is disabled 00:05:44.807 EAL: No shared files mode enabled, IPC is disabled 00:05:44.807 EAL: TSC frequency is ~2200000 KHz 00:05:44.807 EAL: Main lcore 0 is ready (tid=7f1c21015a00;cpuset=[0]) 00:05:44.807 EAL: Trying to obtain current memory policy. 00:05:44.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.807 EAL: Restoring previous memory policy: 0 00:05:44.807 EAL: request: mp_malloc_sync 00:05:44.807 EAL: No shared files mode enabled, IPC is disabled 00:05:44.807 EAL: Heap on socket 0 was expanded by 2MB 00:05:44.807 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:44.807 EAL: No shared files mode enabled, IPC is disabled 00:05:44.807 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:44.807 EAL: Mem event callback 'spdk:(nil)' registered 00:05:44.808 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:45.067 00:05:45.067 00:05:45.067 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.067 http://cunit.sourceforge.net/ 00:05:45.067 00:05:45.067 00:05:45.067 Suite: components_suite 00:05:45.067 Test: vtophys_malloc_test ...passed 00:05:45.067 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:45.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.067 EAL: Restoring previous memory policy: 4 00:05:45.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.067 EAL: request: mp_malloc_sync 00:05:45.067 EAL: No shared files mode enabled, IPC is disabled 00:05:45.067 EAL: Heap on socket 0 was expanded by 4MB 00:05:45.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.067 EAL: request: mp_malloc_sync 00:05:45.067 EAL: No shared files mode enabled, IPC is disabled 00:05:45.067 EAL: Heap on socket 0 was shrunk by 4MB 00:05:45.067 EAL: Trying to obtain current memory policy. 00:05:45.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.067 EAL: Restoring previous memory policy: 4 00:05:45.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.067 EAL: request: mp_malloc_sync 00:05:45.067 EAL: No shared files mode enabled, IPC is disabled 00:05:45.067 EAL: Heap on socket 0 was expanded by 6MB 00:05:45.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.067 EAL: request: mp_malloc_sync 00:05:45.067 EAL: No shared files mode enabled, IPC is disabled 00:05:45.067 EAL: Heap on socket 0 was shrunk by 6MB 00:05:45.067 EAL: Trying to obtain current memory policy. 00:05:45.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.067 EAL: Restoring previous memory policy: 4 00:05:45.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.067 EAL: request: mp_malloc_sync 00:05:45.067 EAL: No shared files mode enabled, IPC is disabled 00:05:45.067 EAL: Heap on socket 0 was expanded by 10MB 00:05:45.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.067 EAL: request: mp_malloc_sync 00:05:45.067 EAL: No shared files mode enabled, IPC is disabled 00:05:45.067 EAL: Heap on socket 0 was shrunk by 10MB 00:05:45.067 EAL: Trying to obtain current memory policy. 00:05:45.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.067 EAL: Restoring previous memory policy: 4 00:05:45.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.067 EAL: request: mp_malloc_sync 00:05:45.067 EAL: No shared files mode enabled, IPC is disabled 00:05:45.067 EAL: Heap on socket 0 was expanded by 18MB 00:05:45.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.067 EAL: request: mp_malloc_sync 00:05:45.067 EAL: No shared files mode enabled, IPC is disabled 00:05:45.067 EAL: Heap on socket 0 was shrunk by 18MB 00:05:45.067 EAL: Trying to obtain current memory policy. 00:05:45.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.067 EAL: Restoring previous memory policy: 4 00:05:45.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.067 EAL: request: mp_malloc_sync 00:05:45.067 EAL: No shared files mode enabled, IPC is disabled 00:05:45.067 EAL: Heap on socket 0 was expanded by 34MB 00:05:45.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.067 EAL: request: mp_malloc_sync 00:05:45.067 EAL: No shared files mode enabled, IPC is disabled 00:05:45.067 EAL: Heap on socket 0 was shrunk by 34MB 00:05:45.067 EAL: Trying to obtain current memory policy. 
00:05:45.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.067 EAL: Restoring previous memory policy: 4 00:05:45.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.067 EAL: request: mp_malloc_sync 00:05:45.067 EAL: No shared files mode enabled, IPC is disabled 00:05:45.067 EAL: Heap on socket 0 was expanded by 66MB 00:05:45.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.067 EAL: request: mp_malloc_sync 00:05:45.067 EAL: No shared files mode enabled, IPC is disabled 00:05:45.067 EAL: Heap on socket 0 was shrunk by 66MB 00:05:45.067 EAL: Trying to obtain current memory policy. 00:05:45.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.067 EAL: Restoring previous memory policy: 4 00:05:45.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.067 EAL: request: mp_malloc_sync 00:05:45.067 EAL: No shared files mode enabled, IPC is disabled 00:05:45.067 EAL: Heap on socket 0 was expanded by 130MB 00:05:45.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.067 EAL: request: mp_malloc_sync 00:05:45.067 EAL: No shared files mode enabled, IPC is disabled 00:05:45.067 EAL: Heap on socket 0 was shrunk by 130MB 00:05:45.067 EAL: Trying to obtain current memory policy. 00:05:45.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.067 EAL: Restoring previous memory policy: 4 00:05:45.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.067 EAL: request: mp_malloc_sync 00:05:45.067 EAL: No shared files mode enabled, IPC is disabled 00:05:45.067 EAL: Heap on socket 0 was expanded by 258MB 00:05:45.326 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.326 EAL: request: mp_malloc_sync 00:05:45.326 EAL: No shared files mode enabled, IPC is disabled 00:05:45.326 EAL: Heap on socket 0 was shrunk by 258MB 00:05:45.326 EAL: Trying to obtain current memory policy. 00:05:45.326 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.326 EAL: Restoring previous memory policy: 4 00:05:45.326 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.326 EAL: request: mp_malloc_sync 00:05:45.326 EAL: No shared files mode enabled, IPC is disabled 00:05:45.326 EAL: Heap on socket 0 was expanded by 514MB 00:05:45.585 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.585 EAL: request: mp_malloc_sync 00:05:45.585 EAL: No shared files mode enabled, IPC is disabled 00:05:45.585 EAL: Heap on socket 0 was shrunk by 514MB 00:05:45.585 EAL: Trying to obtain current memory policy. 
00:05:45.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.842 EAL: Restoring previous memory policy: 4 00:05:45.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.842 EAL: request: mp_malloc_sync 00:05:45.842 EAL: No shared files mode enabled, IPC is disabled 00:05:45.842 EAL: Heap on socket 0 was expanded by 1026MB 00:05:46.098 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.356 passed 00:05:46.356 00:05:46.356 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.356 suites 1 1 n/a 0 0 00:05:46.356 tests 2 2 2 0 0 00:05:46.356 asserts 5120 5120 5120 0 n/a 00:05:46.356 00:05:46.356 Elapsed time = 1.337 seconds 00:05:46.356 EAL: request: mp_malloc_sync 00:05:46.356 EAL: No shared files mode enabled, IPC is disabled 00:05:46.356 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:46.356 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.356 EAL: request: mp_malloc_sync 00:05:46.356 EAL: No shared files mode enabled, IPC is disabled 00:05:46.356 EAL: Heap on socket 0 was shrunk by 2MB 00:05:46.356 EAL: No shared files mode enabled, IPC is disabled 00:05:46.356 EAL: No shared files mode enabled, IPC is disabled 00:05:46.356 EAL: No shared files mode enabled, IPC is disabled 00:05:46.356 00:05:46.356 real 0m1.537s 00:05:46.356 user 0m0.852s 00:05:46.356 sys 0m0.551s 00:05:46.356 00:14:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.356 00:14:33 -- common/autotest_common.sh@10 -- # set +x 00:05:46.356 ************************************ 00:05:46.356 END TEST env_vtophys 00:05:46.356 ************************************ 00:05:46.356 00:14:33 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:46.356 00:14:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.356 00:14:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.356 00:14:33 -- common/autotest_common.sh@10 -- # set +x 00:05:46.356 ************************************ 00:05:46.356 START TEST env_pci 00:05:46.356 ************************************ 00:05:46.356 00:14:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:46.356 00:05:46.356 00:05:46.356 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.356 http://cunit.sourceforge.net/ 00:05:46.356 00:05:46.356 00:05:46.356 Suite: pci 00:05:46.356 Test: pci_hook ...[2024-07-13 00:14:33.501314] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67429 has claimed it 00:05:46.356 passed 00:05:46.356 00:05:46.356 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.356 suites 1 1 n/a 0 0 00:05:46.356 tests 1 1 1 0 0 00:05:46.356 asserts 25 25 25 0 n/a 00:05:46.356 00:05:46.356 Elapsed time = 0.002 seconds 00:05:46.356 EAL: Cannot find device (10000:00:01.0) 00:05:46.356 EAL: Failed to attach device on primary process 00:05:46.356 00:05:46.356 real 0m0.022s 00:05:46.356 user 0m0.010s 00:05:46.356 sys 0m0.012s 00:05:46.356 00:14:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.356 00:14:33 -- common/autotest_common.sh@10 -- # set +x 00:05:46.356 ************************************ 00:05:46.356 END TEST env_pci 00:05:46.356 ************************************ 00:05:46.356 00:14:33 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:46.356 00:14:33 -- env/env.sh@15 -- # uname 00:05:46.356 00:14:33 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:46.356 00:14:33 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:46.356 00:14:33 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:46.356 00:14:33 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:46.356 00:14:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.356 00:14:33 -- common/autotest_common.sh@10 -- # set +x 00:05:46.356 ************************************ 00:05:46.356 START TEST env_dpdk_post_init 00:05:46.356 ************************************ 00:05:46.356 00:14:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:46.614 EAL: Detected CPU lcores: 10 00:05:46.614 EAL: Detected NUMA nodes: 1 00:05:46.614 EAL: Detected shared linkage of DPDK 00:05:46.614 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:46.614 EAL: Selected IOVA mode 'PA' 00:05:46.614 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:46.614 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:46.614 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:46.614 Starting DPDK initialization... 00:05:46.614 Starting SPDK post initialization... 00:05:46.614 SPDK NVMe probe 00:05:46.614 Attaching to 0000:00:06.0 00:05:46.614 Attaching to 0000:00:07.0 00:05:46.614 Attached to 0000:00:06.0 00:05:46.614 Attached to 0000:00:07.0 00:05:46.614 Cleaning up... 00:05:46.614 00:05:46.614 real 0m0.177s 00:05:46.614 user 0m0.039s 00:05:46.614 sys 0m0.038s 00:05:46.614 00:14:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.614 00:14:33 -- common/autotest_common.sh@10 -- # set +x 00:05:46.614 ************************************ 00:05:46.614 END TEST env_dpdk_post_init 00:05:46.614 ************************************ 00:05:46.614 00:14:33 -- env/env.sh@26 -- # uname 00:05:46.614 00:14:33 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:46.614 00:14:33 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:46.614 00:14:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.614 00:14:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.614 00:14:33 -- common/autotest_common.sh@10 -- # set +x 00:05:46.614 ************************************ 00:05:46.614 START TEST env_mem_callbacks 00:05:46.614 ************************************ 00:05:46.614 00:14:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:46.614 EAL: Detected CPU lcores: 10 00:05:46.614 EAL: Detected NUMA nodes: 1 00:05:46.614 EAL: Detected shared linkage of DPDK 00:05:46.614 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:46.614 EAL: Selected IOVA mode 'PA' 00:05:46.872 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:46.872 00:05:46.872 00:05:46.872 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.872 http://cunit.sourceforge.net/ 00:05:46.872 00:05:46.872 00:05:46.872 Suite: memory 00:05:46.872 Test: test ... 
00:05:46.872 register 0x200000200000 2097152 00:05:46.872 malloc 3145728 00:05:46.872 register 0x200000400000 4194304 00:05:46.872 buf 0x200000500000 len 3145728 PASSED 00:05:46.872 malloc 64 00:05:46.872 buf 0x2000004fff40 len 64 PASSED 00:05:46.872 malloc 4194304 00:05:46.872 register 0x200000800000 6291456 00:05:46.872 buf 0x200000a00000 len 4194304 PASSED 00:05:46.872 free 0x200000500000 3145728 00:05:46.872 free 0x2000004fff40 64 00:05:46.872 unregister 0x200000400000 4194304 PASSED 00:05:46.872 free 0x200000a00000 4194304 00:05:46.873 unregister 0x200000800000 6291456 PASSED 00:05:46.873 malloc 8388608 00:05:46.873 register 0x200000400000 10485760 00:05:46.873 buf 0x200000600000 len 8388608 PASSED 00:05:46.873 free 0x200000600000 8388608 00:05:46.873 unregister 0x200000400000 10485760 PASSED 00:05:46.873 passed 00:05:46.873 00:05:46.873 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.873 suites 1 1 n/a 0 0 00:05:46.873 tests 1 1 1 0 0 00:05:46.873 asserts 15 15 15 0 n/a 00:05:46.873 00:05:46.873 Elapsed time = 0.008 seconds 00:05:46.873 00:05:46.873 real 0m0.142s 00:05:46.873 user 0m0.016s 00:05:46.873 sys 0m0.025s 00:05:46.873 00:14:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.873 00:14:33 -- common/autotest_common.sh@10 -- # set +x 00:05:46.873 ************************************ 00:05:46.873 END TEST env_mem_callbacks 00:05:46.873 ************************************ 00:05:46.873 00:05:46.873 real 0m2.464s 00:05:46.873 user 0m1.239s 00:05:46.873 sys 0m0.866s 00:05:46.873 00:14:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.873 00:14:33 -- common/autotest_common.sh@10 -- # set +x 00:05:46.873 ************************************ 00:05:46.873 END TEST env 00:05:46.873 ************************************ 00:05:46.873 00:14:34 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:46.873 00:14:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.873 00:14:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.873 00:14:34 -- common/autotest_common.sh@10 -- # set +x 00:05:46.873 ************************************ 00:05:46.873 START TEST rpc 00:05:46.873 ************************************ 00:05:46.873 00:14:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:47.131 * Looking for test storage... 00:05:47.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.131 00:14:34 -- rpc/rpc.sh@65 -- # spdk_pid=67543 00:05:47.131 00:14:34 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.131 00:14:34 -- rpc/rpc.sh@67 -- # waitforlisten 67543 00:05:47.131 00:14:34 -- common/autotest_common.sh@819 -- # '[' -z 67543 ']' 00:05:47.131 00:14:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.131 00:14:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:47.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.131 00:14:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
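rpc.sh starts a single spdk_tgt (here with "-e bdev" to enable the bdev tracepoint group) and blocks in waitforlisten until the target answers on /var/tmp/spdk.sock. The actual helper does more bookkeeping; a rough stand-in for the wait, assuming scripts/rpc.py and the rpc_get_methods RPC are available, is:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  # poll the default RPC socket until the target responds
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$spdk_pid" || { echo "spdk_tgt died" >&2; exit 1; }
    sleep 0.2
  done
  echo "spdk_tgt ($spdk_pid) is listening on /var/tmp/spdk.sock"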
00:05:47.131 00:14:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:47.131 00:14:34 -- common/autotest_common.sh@10 -- # set +x 00:05:47.131 00:14:34 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:47.131 [2024-07-13 00:14:34.183668] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:47.131 [2024-07-13 00:14:34.183788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67543 ] 00:05:47.131 [2024-07-13 00:14:34.324766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.390 [2024-07-13 00:14:34.398940] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:47.390 [2024-07-13 00:14:34.399125] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:47.390 [2024-07-13 00:14:34.399140] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67543' to capture a snapshot of events at runtime. 00:05:47.390 [2024-07-13 00:14:34.399148] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67543 for offline analysis/debug. 00:05:47.390 [2024-07-13 00:14:34.399176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.958 00:14:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.958 00:14:35 -- common/autotest_common.sh@852 -- # return 0 00:05:47.958 00:14:35 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.958 00:14:35 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.958 00:14:35 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:47.958 00:14:35 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:47.958 00:14:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.958 00:14:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.958 00:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:47.958 ************************************ 00:05:47.958 START TEST rpc_integrity 00:05:47.958 ************************************ 00:05:47.958 00:14:35 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:47.958 00:14:35 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:47.958 00:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.958 00:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:47.958 00:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.958 00:14:35 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:47.958 00:14:35 -- rpc/rpc.sh@13 -- # jq length 00:05:48.217 00:14:35 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:48.217 00:14:35 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.217 00:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.217 00:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:48.217 00:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.217 00:14:35 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:48.217 00:14:35 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:48.217 00:14:35 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.217 00:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:48.217 00:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.217 00:14:35 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:48.217 { 00:05:48.217 "aliases": [ 00:05:48.217 "b25f1c04-014a-4e10-9915-351d460592be" 00:05:48.217 ], 00:05:48.217 "assigned_rate_limits": { 00:05:48.217 "r_mbytes_per_sec": 0, 00:05:48.217 "rw_ios_per_sec": 0, 00:05:48.217 "rw_mbytes_per_sec": 0, 00:05:48.217 "w_mbytes_per_sec": 0 00:05:48.217 }, 00:05:48.217 "block_size": 512, 00:05:48.217 "claimed": false, 00:05:48.217 "driver_specific": {}, 00:05:48.217 "memory_domains": [ 00:05:48.217 { 00:05:48.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.217 "dma_device_type": 2 00:05:48.217 } 00:05:48.217 ], 00:05:48.217 "name": "Malloc0", 00:05:48.217 "num_blocks": 16384, 00:05:48.217 "product_name": "Malloc disk", 00:05:48.217 "supported_io_types": { 00:05:48.217 "abort": true, 00:05:48.217 "compare": false, 00:05:48.217 "compare_and_write": false, 00:05:48.217 "flush": true, 00:05:48.217 "nvme_admin": false, 00:05:48.217 "nvme_io": false, 00:05:48.217 "read": true, 00:05:48.217 "reset": true, 00:05:48.217 "unmap": true, 00:05:48.217 "write": true, 00:05:48.217 "write_zeroes": true 00:05:48.217 }, 00:05:48.217 "uuid": "b25f1c04-014a-4e10-9915-351d460592be", 00:05:48.217 "zoned": false 00:05:48.217 } 00:05:48.217 ]' 00:05:48.217 00:14:35 -- rpc/rpc.sh@17 -- # jq length 00:05:48.217 00:14:35 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:48.217 00:14:35 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:48.217 00:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.217 00:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:48.217 [2024-07-13 00:14:35.310919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:48.217 [2024-07-13 00:14:35.310974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.217 [2024-07-13 00:14:35.311019] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6880b0 00:05:48.217 [2024-07-13 00:14:35.311033] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.217 [2024-07-13 00:14:35.312383] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.217 [2024-07-13 00:14:35.312411] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:48.217 Passthru0 00:05:48.217 00:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.217 00:14:35 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:48.217 00:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.217 00:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:48.217 00:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.217 00:14:35 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:48.217 { 00:05:48.217 "aliases": [ 00:05:48.217 "b25f1c04-014a-4e10-9915-351d460592be" 00:05:48.217 ], 00:05:48.217 "assigned_rate_limits": { 00:05:48.217 "r_mbytes_per_sec": 0, 00:05:48.217 "rw_ios_per_sec": 0, 00:05:48.217 "rw_mbytes_per_sec": 0, 00:05:48.217 "w_mbytes_per_sec": 0 00:05:48.217 }, 00:05:48.217 "block_size": 512, 00:05:48.217 "claim_type": "exclusive_write", 00:05:48.217 "claimed": true, 00:05:48.217 "driver_specific": {}, 00:05:48.217 "memory_domains": [ 00:05:48.217 { 00:05:48.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.217 "dma_device_type": 2 00:05:48.217 } 
00:05:48.217 ], 00:05:48.217 "name": "Malloc0", 00:05:48.217 "num_blocks": 16384, 00:05:48.217 "product_name": "Malloc disk", 00:05:48.217 "supported_io_types": { 00:05:48.217 "abort": true, 00:05:48.217 "compare": false, 00:05:48.217 "compare_and_write": false, 00:05:48.217 "flush": true, 00:05:48.217 "nvme_admin": false, 00:05:48.217 "nvme_io": false, 00:05:48.217 "read": true, 00:05:48.217 "reset": true, 00:05:48.217 "unmap": true, 00:05:48.217 "write": true, 00:05:48.217 "write_zeroes": true 00:05:48.217 }, 00:05:48.217 "uuid": "b25f1c04-014a-4e10-9915-351d460592be", 00:05:48.217 "zoned": false 00:05:48.217 }, 00:05:48.217 { 00:05:48.217 "aliases": [ 00:05:48.217 "29852599-5109-5d13-9152-c909d645b6c5" 00:05:48.217 ], 00:05:48.217 "assigned_rate_limits": { 00:05:48.217 "r_mbytes_per_sec": 0, 00:05:48.217 "rw_ios_per_sec": 0, 00:05:48.217 "rw_mbytes_per_sec": 0, 00:05:48.217 "w_mbytes_per_sec": 0 00:05:48.217 }, 00:05:48.217 "block_size": 512, 00:05:48.217 "claimed": false, 00:05:48.217 "driver_specific": { 00:05:48.217 "passthru": { 00:05:48.217 "base_bdev_name": "Malloc0", 00:05:48.217 "name": "Passthru0" 00:05:48.217 } 00:05:48.217 }, 00:05:48.217 "memory_domains": [ 00:05:48.217 { 00:05:48.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.217 "dma_device_type": 2 00:05:48.217 } 00:05:48.217 ], 00:05:48.217 "name": "Passthru0", 00:05:48.217 "num_blocks": 16384, 00:05:48.217 "product_name": "passthru", 00:05:48.217 "supported_io_types": { 00:05:48.217 "abort": true, 00:05:48.217 "compare": false, 00:05:48.217 "compare_and_write": false, 00:05:48.217 "flush": true, 00:05:48.217 "nvme_admin": false, 00:05:48.217 "nvme_io": false, 00:05:48.217 "read": true, 00:05:48.217 "reset": true, 00:05:48.217 "unmap": true, 00:05:48.217 "write": true, 00:05:48.217 "write_zeroes": true 00:05:48.217 }, 00:05:48.217 "uuid": "29852599-5109-5d13-9152-c909d645b6c5", 00:05:48.217 "zoned": false 00:05:48.217 } 00:05:48.217 ]' 00:05:48.217 00:14:35 -- rpc/rpc.sh@21 -- # jq length 00:05:48.217 00:14:35 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:48.217 00:14:35 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:48.217 00:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.217 00:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:48.217 00:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.217 00:14:35 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:48.217 00:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.217 00:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:48.217 00:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.217 00:14:35 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.217 00:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.217 00:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:48.217 00:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.217 00:14:35 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.217 00:14:35 -- rpc/rpc.sh@26 -- # jq length 00:05:48.475 ************************************ 00:05:48.475 END TEST rpc_integrity 00:05:48.475 ************************************ 00:05:48.475 00:14:35 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:48.475 00:05:48.475 real 0m0.326s 00:05:48.475 user 0m0.216s 00:05:48.476 sys 0m0.035s 00:05:48.476 00:14:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.476 00:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:48.476 00:14:35 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 
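The rpc_integrity pass that just finished exercises the malloc/passthru bdev RPC cycle. Issued directly with scripts/rpc.py it looks roughly like this (RPC names and arguments taken from the trace; the jq counts mirror the test's checks):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_get_bdevs | jq length                    # 0 bdevs to start
  "$rpc" bdev_malloc_create 8 512                      # 8 MiB, 512 B blocks -> Malloc0 (16384 blocks)
  "$rpc" bdev_passthru_create -b Malloc0 -p Passthru0  # Passthru0 claims Malloc0 (exclusive_write)
  "$rpc" bdev_get_bdevs | jq length                    # now 2
  "$rpc" bdev_passthru_delete Passthru0
  "$rpc" bdev_malloc_delete Malloc0
  "$rpc" bdev_get_bdevs | jq length                    # back to 0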
00:05:48.476 00:14:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.476 00:14:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.476 00:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:48.476 ************************************ 00:05:48.476 START TEST rpc_plugins 00:05:48.476 ************************************ 00:05:48.476 00:14:35 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:48.476 00:14:35 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:48.476 00:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.476 00:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:48.476 00:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.476 00:14:35 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:48.476 00:14:35 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:48.476 00:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.476 00:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:48.476 00:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.476 00:14:35 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:48.476 { 00:05:48.476 "aliases": [ 00:05:48.476 "4baf2946-18e4-4231-9e34-ef357c6bba27" 00:05:48.476 ], 00:05:48.476 "assigned_rate_limits": { 00:05:48.476 "r_mbytes_per_sec": 0, 00:05:48.476 "rw_ios_per_sec": 0, 00:05:48.476 "rw_mbytes_per_sec": 0, 00:05:48.476 "w_mbytes_per_sec": 0 00:05:48.476 }, 00:05:48.476 "block_size": 4096, 00:05:48.476 "claimed": false, 00:05:48.476 "driver_specific": {}, 00:05:48.476 "memory_domains": [ 00:05:48.476 { 00:05:48.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.476 "dma_device_type": 2 00:05:48.476 } 00:05:48.476 ], 00:05:48.476 "name": "Malloc1", 00:05:48.476 "num_blocks": 256, 00:05:48.476 "product_name": "Malloc disk", 00:05:48.476 "supported_io_types": { 00:05:48.476 "abort": true, 00:05:48.476 "compare": false, 00:05:48.476 "compare_and_write": false, 00:05:48.476 "flush": true, 00:05:48.476 "nvme_admin": false, 00:05:48.476 "nvme_io": false, 00:05:48.476 "read": true, 00:05:48.476 "reset": true, 00:05:48.476 "unmap": true, 00:05:48.476 "write": true, 00:05:48.476 "write_zeroes": true 00:05:48.476 }, 00:05:48.476 "uuid": "4baf2946-18e4-4231-9e34-ef357c6bba27", 00:05:48.476 "zoned": false 00:05:48.476 } 00:05:48.476 ]' 00:05:48.476 00:14:35 -- rpc/rpc.sh@32 -- # jq length 00:05:48.476 00:14:35 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:48.476 00:14:35 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:48.476 00:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.476 00:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:48.476 00:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.476 00:14:35 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:48.476 00:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.476 00:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:48.476 00:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.476 00:14:35 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:48.476 00:14:35 -- rpc/rpc.sh@36 -- # jq length 00:05:48.476 ************************************ 00:05:48.476 END TEST rpc_plugins 00:05:48.476 ************************************ 00:05:48.476 00:14:35 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:48.476 00:05:48.476 real 0m0.167s 00:05:48.476 user 0m0.112s 00:05:48.476 sys 0m0.016s 00:05:48.476 00:14:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.476 00:14:35 -- common/autotest_common.sh@10 -- # set +x 
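rpc_plugins above exercises rpc.py's plugin mechanism: create_malloc and delete_malloc are not built-in RPCs, they come from the rpc_plugin module that the PYTHONPATH exported earlier (including test/rpc_plugins) makes importable. A hedged equivalent invocation:

  export PYTHONPATH=/home/vagrant/spdk_repo/spdk/test/rpc_plugins:$PYTHONPATH
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin rpc_plugin create_malloc        # -> Malloc1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1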
00:05:48.735 00:14:35 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:48.735 00:14:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.735 00:14:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.735 00:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:48.735 ************************************ 00:05:48.735 START TEST rpc_trace_cmd_test 00:05:48.735 ************************************ 00:05:48.735 00:14:35 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:48.735 00:14:35 -- rpc/rpc.sh@40 -- # local info 00:05:48.735 00:14:35 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:48.735 00:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.735 00:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:48.735 00:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.735 00:14:35 -- rpc/rpc.sh@42 -- # info='{ 00:05:48.735 "bdev": { 00:05:48.735 "mask": "0x8", 00:05:48.735 "tpoint_mask": "0xffffffffffffffff" 00:05:48.735 }, 00:05:48.735 "bdev_nvme": { 00:05:48.735 "mask": "0x4000", 00:05:48.735 "tpoint_mask": "0x0" 00:05:48.735 }, 00:05:48.735 "blobfs": { 00:05:48.735 "mask": "0x80", 00:05:48.735 "tpoint_mask": "0x0" 00:05:48.735 }, 00:05:48.735 "dsa": { 00:05:48.735 "mask": "0x200", 00:05:48.735 "tpoint_mask": "0x0" 00:05:48.735 }, 00:05:48.735 "ftl": { 00:05:48.735 "mask": "0x40", 00:05:48.735 "tpoint_mask": "0x0" 00:05:48.735 }, 00:05:48.735 "iaa": { 00:05:48.735 "mask": "0x1000", 00:05:48.735 "tpoint_mask": "0x0" 00:05:48.735 }, 00:05:48.735 "iscsi_conn": { 00:05:48.735 "mask": "0x2", 00:05:48.735 "tpoint_mask": "0x0" 00:05:48.735 }, 00:05:48.735 "nvme_pcie": { 00:05:48.735 "mask": "0x800", 00:05:48.735 "tpoint_mask": "0x0" 00:05:48.735 }, 00:05:48.735 "nvme_tcp": { 00:05:48.735 "mask": "0x2000", 00:05:48.735 "tpoint_mask": "0x0" 00:05:48.735 }, 00:05:48.735 "nvmf_rdma": { 00:05:48.735 "mask": "0x10", 00:05:48.735 "tpoint_mask": "0x0" 00:05:48.735 }, 00:05:48.735 "nvmf_tcp": { 00:05:48.735 "mask": "0x20", 00:05:48.735 "tpoint_mask": "0x0" 00:05:48.735 }, 00:05:48.735 "scsi": { 00:05:48.735 "mask": "0x4", 00:05:48.735 "tpoint_mask": "0x0" 00:05:48.735 }, 00:05:48.735 "thread": { 00:05:48.735 "mask": "0x400", 00:05:48.735 "tpoint_mask": "0x0" 00:05:48.735 }, 00:05:48.735 "tpoint_group_mask": "0x8", 00:05:48.735 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67543" 00:05:48.735 }' 00:05:48.735 00:14:35 -- rpc/rpc.sh@43 -- # jq length 00:05:48.735 00:14:35 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:48.735 00:14:35 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:48.735 00:14:35 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:48.735 00:14:35 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:48.735 00:14:35 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:48.735 00:14:35 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:48.993 00:14:35 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:48.993 00:14:35 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:48.993 ************************************ 00:05:48.993 END TEST rpc_trace_cmd_test 00:05:48.993 ************************************ 00:05:48.993 00:14:36 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:48.993 00:05:48.993 real 0m0.282s 00:05:48.993 user 0m0.233s 00:05:48.993 sys 0m0.036s 00:05:48.993 00:14:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.993 00:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:48.993 00:14:36 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:48.993 00:14:36 -- 
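rpc_trace_cmd_test above confirms that the "-e bdev" flag passed to spdk_tgt really enabled tracing: tpoint_group_mask comes back as 0x8 (the bdev group), the bdev group's tpoint_mask is all-ones, and the shared-memory trace file sits at /dev/shm/spdk_tgt_trace.pid67543. Inspecting and capturing that by hand, assuming the target from this run is still up, would look like:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py trace_get_info | jq -r '.tpoint_group_mask'   # "0x8"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py trace_get_info | jq -r '.bdev.tpoint_mask'    # "0xffffffffffffffff"
  # snapshot the circular trace buffer at runtime, as suggested in the startup notice
  spdk_trace -s spdk_tgt -p 67543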
rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:48.993 00:14:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.993 00:14:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.993 00:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:48.993 ************************************ 00:05:48.993 START TEST go_rpc 00:05:48.993 ************************************ 00:05:48.993 00:14:36 -- common/autotest_common.sh@1104 -- # go_rpc 00:05:48.993 00:14:36 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:48.993 00:14:36 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:48.993 00:14:36 -- rpc/rpc.sh@52 -- # jq length 00:05:48.993 00:14:36 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:48.993 00:14:36 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.993 00:14:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.993 00:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:48.993 00:14:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.993 00:14:36 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:48.993 00:14:36 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:48.994 00:14:36 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["f4170220-4d4a-4e95-bf06-74bda273d6c9"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"f4170220-4d4a-4e95-bf06-74bda273d6c9","zoned":false}]' 00:05:48.994 00:14:36 -- rpc/rpc.sh@57 -- # jq length 00:05:49.252 00:14:36 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:49.252 00:14:36 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:49.252 00:14:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.252 00:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:49.252 00:14:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.253 00:14:36 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:49.253 00:14:36 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:49.253 00:14:36 -- rpc/rpc.sh@61 -- # jq length 00:05:49.253 ************************************ 00:05:49.253 END TEST go_rpc 00:05:49.253 ************************************ 00:05:49.253 00:14:36 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:49.253 00:05:49.253 real 0m0.221s 00:05:49.253 user 0m0.151s 00:05:49.253 sys 0m0.039s 00:05:49.253 00:14:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.253 00:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:49.253 00:14:36 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:49.253 00:14:36 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:49.253 00:14:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.253 00:14:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.253 00:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:49.253 ************************************ 00:05:49.253 START TEST rpc_daemon_integrity 00:05:49.253 ************************************ 00:05:49.253 00:14:36 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:49.253 00:14:36 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:49.253 
00:14:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.253 00:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:49.253 00:14:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.253 00:14:36 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:49.253 00:14:36 -- rpc/rpc.sh@13 -- # jq length 00:05:49.253 00:14:36 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:49.253 00:14:36 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:49.253 00:14:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.253 00:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:49.253 00:14:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.253 00:14:36 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:49.253 00:14:36 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:49.253 00:14:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.253 00:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:49.253 00:14:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.253 00:14:36 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:49.253 { 00:05:49.253 "aliases": [ 00:05:49.253 "567b0bb3-922d-4ff8-bbd7-d5f2cb0911d1" 00:05:49.253 ], 00:05:49.253 "assigned_rate_limits": { 00:05:49.253 "r_mbytes_per_sec": 0, 00:05:49.253 "rw_ios_per_sec": 0, 00:05:49.253 "rw_mbytes_per_sec": 0, 00:05:49.253 "w_mbytes_per_sec": 0 00:05:49.253 }, 00:05:49.253 "block_size": 512, 00:05:49.253 "claimed": false, 00:05:49.253 "driver_specific": {}, 00:05:49.253 "memory_domains": [ 00:05:49.253 { 00:05:49.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.253 "dma_device_type": 2 00:05:49.253 } 00:05:49.253 ], 00:05:49.253 "name": "Malloc3", 00:05:49.253 "num_blocks": 16384, 00:05:49.253 "product_name": "Malloc disk", 00:05:49.253 "supported_io_types": { 00:05:49.253 "abort": true, 00:05:49.253 "compare": false, 00:05:49.253 "compare_and_write": false, 00:05:49.253 "flush": true, 00:05:49.253 "nvme_admin": false, 00:05:49.253 "nvme_io": false, 00:05:49.253 "read": true, 00:05:49.253 "reset": true, 00:05:49.253 "unmap": true, 00:05:49.253 "write": true, 00:05:49.253 "write_zeroes": true 00:05:49.253 }, 00:05:49.253 "uuid": "567b0bb3-922d-4ff8-bbd7-d5f2cb0911d1", 00:05:49.253 "zoned": false 00:05:49.253 } 00:05:49.253 ]' 00:05:49.253 00:14:36 -- rpc/rpc.sh@17 -- # jq length 00:05:49.512 00:14:36 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:49.512 00:14:36 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:49.512 00:14:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.512 00:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:49.512 [2024-07-13 00:14:36.511421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:49.512 [2024-07-13 00:14:36.511480] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:49.512 [2024-07-13 00:14:36.511499] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x827b50 00:05:49.512 [2024-07-13 00:14:36.511523] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:49.512 [2024-07-13 00:14:36.512921] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:49.512 [2024-07-13 00:14:36.512958] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:49.512 Passthru0 00:05:49.512 00:14:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.512 00:14:36 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:49.512 00:14:36 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:05:49.512 00:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:49.512 00:14:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.512 00:14:36 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:49.512 { 00:05:49.512 "aliases": [ 00:05:49.512 "567b0bb3-922d-4ff8-bbd7-d5f2cb0911d1" 00:05:49.512 ], 00:05:49.512 "assigned_rate_limits": { 00:05:49.512 "r_mbytes_per_sec": 0, 00:05:49.512 "rw_ios_per_sec": 0, 00:05:49.512 "rw_mbytes_per_sec": 0, 00:05:49.512 "w_mbytes_per_sec": 0 00:05:49.512 }, 00:05:49.512 "block_size": 512, 00:05:49.512 "claim_type": "exclusive_write", 00:05:49.512 "claimed": true, 00:05:49.512 "driver_specific": {}, 00:05:49.512 "memory_domains": [ 00:05:49.512 { 00:05:49.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.512 "dma_device_type": 2 00:05:49.512 } 00:05:49.512 ], 00:05:49.512 "name": "Malloc3", 00:05:49.512 "num_blocks": 16384, 00:05:49.512 "product_name": "Malloc disk", 00:05:49.512 "supported_io_types": { 00:05:49.512 "abort": true, 00:05:49.512 "compare": false, 00:05:49.512 "compare_and_write": false, 00:05:49.512 "flush": true, 00:05:49.512 "nvme_admin": false, 00:05:49.512 "nvme_io": false, 00:05:49.512 "read": true, 00:05:49.512 "reset": true, 00:05:49.512 "unmap": true, 00:05:49.512 "write": true, 00:05:49.512 "write_zeroes": true 00:05:49.512 }, 00:05:49.512 "uuid": "567b0bb3-922d-4ff8-bbd7-d5f2cb0911d1", 00:05:49.512 "zoned": false 00:05:49.512 }, 00:05:49.512 { 00:05:49.512 "aliases": [ 00:05:49.512 "4b8fd314-026e-5193-a2ed-20d4484bff63" 00:05:49.512 ], 00:05:49.512 "assigned_rate_limits": { 00:05:49.512 "r_mbytes_per_sec": 0, 00:05:49.512 "rw_ios_per_sec": 0, 00:05:49.512 "rw_mbytes_per_sec": 0, 00:05:49.512 "w_mbytes_per_sec": 0 00:05:49.512 }, 00:05:49.512 "block_size": 512, 00:05:49.512 "claimed": false, 00:05:49.512 "driver_specific": { 00:05:49.512 "passthru": { 00:05:49.512 "base_bdev_name": "Malloc3", 00:05:49.512 "name": "Passthru0" 00:05:49.512 } 00:05:49.512 }, 00:05:49.512 "memory_domains": [ 00:05:49.512 { 00:05:49.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.512 "dma_device_type": 2 00:05:49.512 } 00:05:49.512 ], 00:05:49.512 "name": "Passthru0", 00:05:49.512 "num_blocks": 16384, 00:05:49.512 "product_name": "passthru", 00:05:49.512 "supported_io_types": { 00:05:49.512 "abort": true, 00:05:49.512 "compare": false, 00:05:49.512 "compare_and_write": false, 00:05:49.512 "flush": true, 00:05:49.512 "nvme_admin": false, 00:05:49.512 "nvme_io": false, 00:05:49.512 "read": true, 00:05:49.512 "reset": true, 00:05:49.512 "unmap": true, 00:05:49.512 "write": true, 00:05:49.512 "write_zeroes": true 00:05:49.512 }, 00:05:49.512 "uuid": "4b8fd314-026e-5193-a2ed-20d4484bff63", 00:05:49.512 "zoned": false 00:05:49.512 } 00:05:49.512 ]' 00:05:49.512 00:14:36 -- rpc/rpc.sh@21 -- # jq length 00:05:49.512 00:14:36 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:49.512 00:14:36 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:49.512 00:14:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.512 00:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:49.512 00:14:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.513 00:14:36 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:49.513 00:14:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.513 00:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:49.513 00:14:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.513 00:14:36 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:49.513 00:14:36 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:05:49.513 00:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:49.513 00:14:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:49.513 00:14:36 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:49.513 00:14:36 -- rpc/rpc.sh@26 -- # jq length 00:05:49.513 ************************************ 00:05:49.513 END TEST rpc_daemon_integrity 00:05:49.513 ************************************ 00:05:49.513 00:14:36 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:49.513 00:05:49.513 real 0m0.321s 00:05:49.513 user 0m0.217s 00:05:49.513 sys 0m0.040s 00:05:49.513 00:14:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.513 00:14:36 -- common/autotest_common.sh@10 -- # set +x 00:05:49.513 00:14:36 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:49.513 00:14:36 -- rpc/rpc.sh@84 -- # killprocess 67543 00:05:49.513 00:14:36 -- common/autotest_common.sh@926 -- # '[' -z 67543 ']' 00:05:49.513 00:14:36 -- common/autotest_common.sh@930 -- # kill -0 67543 00:05:49.513 00:14:36 -- common/autotest_common.sh@931 -- # uname 00:05:49.513 00:14:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:49.513 00:14:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67543 00:05:49.771 killing process with pid 67543 00:05:49.771 00:14:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:49.771 00:14:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:49.771 00:14:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67543' 00:05:49.771 00:14:36 -- common/autotest_common.sh@945 -- # kill 67543 00:05:49.771 00:14:36 -- common/autotest_common.sh@950 -- # wait 67543 00:05:50.124 ************************************ 00:05:50.124 END TEST rpc 00:05:50.124 ************************************ 00:05:50.124 00:05:50.124 real 0m3.127s 00:05:50.124 user 0m4.111s 00:05:50.124 sys 0m0.771s 00:05:50.124 00:14:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.124 00:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:50.124 00:14:37 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:50.124 00:14:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:50.124 00:14:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.124 00:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:50.124 ************************************ 00:05:50.124 START TEST rpc_client 00:05:50.124 ************************************ 00:05:50.124 00:14:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:50.124 * Looking for test storage... 
00:05:50.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:50.124 00:14:37 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:50.124 OK 00:05:50.124 00:14:37 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:50.124 00:05:50.124 real 0m0.104s 00:05:50.124 user 0m0.047s 00:05:50.124 sys 0m0.063s 00:05:50.124 00:14:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.124 ************************************ 00:05:50.124 00:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:50.124 END TEST rpc_client 00:05:50.124 ************************************ 00:05:50.383 00:14:37 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:50.383 00:14:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:50.383 00:14:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.383 00:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:50.383 ************************************ 00:05:50.383 START TEST json_config 00:05:50.383 ************************************ 00:05:50.383 00:14:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:50.383 00:14:37 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:50.383 00:14:37 -- nvmf/common.sh@7 -- # uname -s 00:05:50.383 00:14:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.383 00:14:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.383 00:14:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.383 00:14:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.383 00:14:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.383 00:14:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.383 00:14:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.383 00:14:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.383 00:14:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.383 00:14:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.383 00:14:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:05:50.383 00:14:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:05:50.383 00:14:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.383 00:14:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.383 00:14:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:50.383 00:14:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:50.383 00:14:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.383 00:14:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.383 00:14:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.384 00:14:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.384 00:14:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.384 00:14:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.384 00:14:37 -- paths/export.sh@5 -- # export PATH 00:05:50.384 00:14:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.384 00:14:37 -- nvmf/common.sh@46 -- # : 0 00:05:50.384 00:14:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:50.384 00:14:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:50.384 00:14:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:50.384 00:14:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.384 00:14:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.384 00:14:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:50.384 00:14:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:50.384 00:14:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:50.384 00:14:37 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:50.384 00:14:37 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:50.384 00:14:37 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:50.384 00:14:37 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:50.384 00:14:37 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:50.384 00:14:37 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:50.384 00:14:37 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:50.384 00:14:37 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:50.384 00:14:37 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:50.384 00:14:37 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:50.384 00:14:37 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:50.384 00:14:37 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:50.384 00:14:37 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:50.384 00:14:37 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:50.384 00:14:37 -- 
json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:50.384 INFO: JSON configuration test init 00:05:50.384 00:14:37 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:50.384 00:14:37 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:50.384 00:14:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:50.384 00:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:50.384 00:14:37 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:50.384 00:14:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:50.384 00:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:50.384 Waiting for target to run... 00:05:50.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:50.384 00:14:37 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:50.384 00:14:37 -- json_config/json_config.sh@98 -- # local app=target 00:05:50.384 00:14:37 -- json_config/json_config.sh@99 -- # shift 00:05:50.384 00:14:37 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:50.384 00:14:37 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:50.384 00:14:37 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:50.384 00:14:37 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:50.384 00:14:37 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:50.384 00:14:37 -- json_config/json_config.sh@111 -- # app_pid[$app]=67843 00:05:50.384 00:14:37 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:50.384 00:14:37 -- json_config/json_config.sh@114 -- # waitforlisten 67843 /var/tmp/spdk_tgt.sock 00:05:50.384 00:14:37 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:50.384 00:14:37 -- common/autotest_common.sh@819 -- # '[' -z 67843 ']' 00:05:50.384 00:14:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:50.384 00:14:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:50.384 00:14:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:50.384 00:14:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:50.384 00:14:37 -- common/autotest_common.sh@10 -- # set +x 00:05:50.384 [2024-07-13 00:14:37.523997] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:50.384 [2024-07-13 00:14:37.524352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67843 ] 00:05:50.951 [2024-07-13 00:14:37.958620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.951 [2024-07-13 00:14:38.021058] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.951 [2024-07-13 00:14:38.021412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.520 00:14:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:51.520 00:14:38 -- common/autotest_common.sh@852 -- # return 0 00:05:51.520 00:14:38 -- json_config/json_config.sh@115 -- # echo '' 00:05:51.520 00:05:51.520 00:14:38 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:51.520 00:14:38 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:51.520 00:14:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:51.520 00:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:51.520 00:14:38 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:51.520 00:14:38 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:51.520 00:14:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:51.520 00:14:38 -- common/autotest_common.sh@10 -- # set +x 00:05:51.520 00:14:38 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:51.520 00:14:38 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:51.520 00:14:38 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:52.087 00:14:39 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:52.087 00:14:39 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:52.087 00:14:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:52.087 00:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:52.087 00:14:39 -- json_config/json_config.sh@48 -- # local ret=0 00:05:52.087 00:14:39 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:52.087 00:14:39 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:52.087 00:14:39 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:52.087 00:14:39 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:52.087 00:14:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:52.087 00:14:39 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:52.087 00:14:39 -- json_config/json_config.sh@51 -- # local get_types 00:05:52.087 00:14:39 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:52.087 00:14:39 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:52.087 00:14:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:52.087 00:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:52.346 00:14:39 -- json_config/json_config.sh@58 -- # return 0 00:05:52.346 00:14:39 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:52.346 00:14:39 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 
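The notification-type check traced above boils down to a single RPC against the target's UNIX socket. As a rough manual repro — a sketch only, assuming the spdk_tgt instance started earlier is still listening on /var/tmp/spdk_tgt.sock:

  # List the notification types the running target emits (bdev_register / bdev_unregister in this run)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'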
00:05:52.346 00:14:39 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:52.346 00:14:39 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:52.346 00:14:39 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:52.346 00:14:39 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:52.346 00:14:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:52.346 00:14:39 -- common/autotest_common.sh@10 -- # set +x 00:05:52.346 00:14:39 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:52.346 00:14:39 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:52.346 00:14:39 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:52.346 00:14:39 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:52.346 00:14:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:52.605 MallocForNvmf0 00:05:52.605 00:14:39 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:52.605 00:14:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:52.864 MallocForNvmf1 00:05:52.864 00:14:39 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:52.864 00:14:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:53.123 [2024-07-13 00:14:40.146423] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.123 00:14:40 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:53.123 00:14:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:53.382 00:14:40 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:53.382 00:14:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:53.640 00:14:40 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:53.640 00:14:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:53.640 00:14:40 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:53.640 00:14:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:53.898 [2024-07-13 00:14:41.054986] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:53.899 00:14:41 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:53.899 00:14:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:53.899 00:14:41 -- common/autotest_common.sh@10 -- # set +x 00:05:53.899 00:14:41 -- 
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:53.899 00:14:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:53.899 00:14:41 -- common/autotest_common.sh@10 -- # set +x 00:05:54.158 00:14:41 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:54.158 00:14:41 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:54.158 00:14:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:54.416 MallocBdevForConfigChangeCheck 00:05:54.416 00:14:41 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:54.416 00:14:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:54.416 00:14:41 -- common/autotest_common.sh@10 -- # set +x 00:05:54.416 00:14:41 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:54.416 00:14:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.675 INFO: shutting down applications... 00:05:54.675 00:14:41 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:54.675 00:14:41 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:54.675 00:14:41 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:54.675 00:14:41 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:54.675 00:14:41 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:54.933 Calling clear_iscsi_subsystem 00:05:54.933 Calling clear_nvmf_subsystem 00:05:54.933 Calling clear_nbd_subsystem 00:05:54.933 Calling clear_ublk_subsystem 00:05:54.933 Calling clear_vhost_blk_subsystem 00:05:54.933 Calling clear_vhost_scsi_subsystem 00:05:54.933 Calling clear_scheduler_subsystem 00:05:54.933 Calling clear_bdev_subsystem 00:05:54.933 Calling clear_accel_subsystem 00:05:54.933 Calling clear_vmd_subsystem 00:05:54.933 Calling clear_sock_subsystem 00:05:54.933 Calling clear_iobuf_subsystem 00:05:54.933 00:14:42 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:54.933 00:14:42 -- json_config/json_config.sh@396 -- # count=100 00:05:54.933 00:14:42 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:54.933 00:14:42 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.933 00:14:42 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:54.933 00:14:42 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:55.500 00:14:42 -- json_config/json_config.sh@398 -- # break 00:05:55.500 00:14:42 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:55.500 00:14:42 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:55.500 00:14:42 -- json_config/json_config.sh@120 -- # local app=target 00:05:55.500 00:14:42 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:55.500 00:14:42 -- json_config/json_config.sh@124 -- # [[ -n 67843 ]] 00:05:55.500 00:14:42 -- json_config/json_config.sh@127 -- # kill -SIGINT 67843 00:05:55.500 00:14:42 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
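The NVMe-oF provisioning traced a few lines up, before the save/clear step, is the usual rpc.py sequence. A sketch using the same socket, bdev names and subsystem NQN as this run (the comments paraphrase the arguments; exact flag semantics are as documented by rpc.py itself):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
  rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB malloc bdev, 512-byte blocks
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB malloc bdev, 1024-byte blocks
  rpc nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport with the test's parameters
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420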
00:05:55.500 00:14:42 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:55.500 00:14:42 -- json_config/json_config.sh@130 -- # kill -0 67843 00:05:55.500 00:14:42 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:56.068 00:14:43 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:56.068 00:14:43 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:56.068 00:14:43 -- json_config/json_config.sh@130 -- # kill -0 67843 00:05:56.068 00:14:43 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:56.068 00:14:43 -- json_config/json_config.sh@132 -- # break 00:05:56.068 00:14:43 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:56.068 SPDK target shutdown done 00:05:56.068 00:14:43 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:56.068 INFO: relaunching applications... 00:05:56.068 00:14:43 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:56.068 00:14:43 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:56.068 00:14:43 -- json_config/json_config.sh@98 -- # local app=target 00:05:56.068 00:14:43 -- json_config/json_config.sh@99 -- # shift 00:05:56.068 00:14:43 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:56.068 00:14:43 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:56.068 00:14:43 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:56.068 00:14:43 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:56.068 00:14:43 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:56.068 00:14:43 -- json_config/json_config.sh@111 -- # app_pid[$app]=68112 00:05:56.068 00:14:43 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:56.068 Waiting for target to run... 00:05:56.068 00:14:43 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:56.068 00:14:43 -- json_config/json_config.sh@114 -- # waitforlisten 68112 /var/tmp/spdk_tgt.sock 00:05:56.068 00:14:43 -- common/autotest_common.sh@819 -- # '[' -z 68112 ']' 00:05:56.068 00:14:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:56.068 00:14:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:56.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:56.068 00:14:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:56.068 00:14:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:56.068 00:14:43 -- common/autotest_common.sh@10 -- # set +x 00:05:56.068 [2024-07-13 00:14:43.114901] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:56.068 [2024-07-13 00:14:43.115005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68112 ] 00:05:56.633 [2024-07-13 00:14:43.563357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.633 [2024-07-13 00:14:43.640041] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.633 [2024-07-13 00:14:43.640269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.891 [2024-07-13 00:14:43.955618] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.891 [2024-07-13 00:14:43.987674] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:56.891 00:14:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:56.891 00:14:44 -- common/autotest_common.sh@852 -- # return 0 00:05:56.891 00:14:44 -- json_config/json_config.sh@115 -- # echo '' 00:05:56.891 00:05:56.891 00:14:44 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:56.891 INFO: Checking if target configuration is the same... 00:05:56.891 00:14:44 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:56.891 00:14:44 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:56.891 00:14:44 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:56.891 00:14:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:56.891 + '[' 2 -ne 2 ']' 00:05:56.891 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:56.891 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:56.891 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:56.891 +++ basename /dev/fd/62 00:05:56.891 ++ mktemp /tmp/62.XXX 00:05:56.891 + tmp_file_1=/tmp/62.3kV 00:05:56.891 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:56.891 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:56.891 + tmp_file_2=/tmp/spdk_tgt_config.json.hYQ 00:05:56.891 + ret=0 00:05:56.891 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:57.457 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:57.457 + diff -u /tmp/62.3kV /tmp/spdk_tgt_config.json.hYQ 00:05:57.457 INFO: JSON config files are the same 00:05:57.457 + echo 'INFO: JSON config files are the same' 00:05:57.457 + rm /tmp/62.3kV /tmp/spdk_tgt_config.json.hYQ 00:05:57.457 + exit 0 00:05:57.457 00:14:44 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:57.457 INFO: changing configuration and checking if this can be detected... 00:05:57.457 00:14:44 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:05:57.457 00:14:44 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:57.457 00:14:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:57.714 00:14:44 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:57.714 00:14:44 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:57.714 00:14:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:57.714 + '[' 2 -ne 2 ']' 00:05:57.714 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:57.714 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:57.714 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:57.714 +++ basename /dev/fd/62 00:05:57.714 ++ mktemp /tmp/62.XXX 00:05:57.714 + tmp_file_1=/tmp/62.Dll 00:05:57.714 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:57.714 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:57.714 + tmp_file_2=/tmp/spdk_tgt_config.json.gDs 00:05:57.714 + ret=0 00:05:57.714 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:57.971 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:57.971 + diff -u /tmp/62.Dll /tmp/spdk_tgt_config.json.gDs 00:05:57.971 + ret=1 00:05:57.971 + echo '=== Start of file: /tmp/62.Dll ===' 00:05:57.971 + cat /tmp/62.Dll 00:05:57.971 + echo '=== End of file: /tmp/62.Dll ===' 00:05:57.971 + echo '' 00:05:57.971 + echo '=== Start of file: /tmp/spdk_tgt_config.json.gDs ===' 00:05:57.971 + cat /tmp/spdk_tgt_config.json.gDs 00:05:57.971 + echo '=== End of file: /tmp/spdk_tgt_config.json.gDs ===' 00:05:57.971 + echo '' 00:05:57.971 + rm /tmp/62.Dll /tmp/spdk_tgt_config.json.gDs 00:05:57.971 + exit 1 00:05:57.971 INFO: configuration change detected. 00:05:57.971 00:14:45 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
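What the two json_diff.sh invocations above actually do: dump the live configuration with save_config, normalize both JSON documents with config_filter.py -method sort, and diff them; deleting MallocBdevForConfigChangeCheck is simply a cheap way to force the second diff to fail. A condensed sketch with the paths from this run (config_filter.py is assumed to read stdin, as json_diff.sh drives it):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
  sort_cfg() { /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort; }
  live=$(mktemp); saved=$(mktemp)
  rpc save_config | sort_cfg > "$live"                            # live target state, normalized
  sort_cfg < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$saved"   # on-disk config, normalized
  diff -u "$saved" "$live" && echo 'INFO: JSON config files are the same'
  rpc bdev_malloc_delete MallocBdevForConfigChangeCheck           # after this, the same diff exits non-zero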
00:05:57.971 00:14:45 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:57.971 00:14:45 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:57.971 00:14:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:57.971 00:14:45 -- common/autotest_common.sh@10 -- # set +x 00:05:57.971 00:14:45 -- json_config/json_config.sh@360 -- # local ret=0 00:05:57.971 00:14:45 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:57.971 00:14:45 -- json_config/json_config.sh@370 -- # [[ -n 68112 ]] 00:05:57.971 00:14:45 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:57.971 00:14:45 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:57.971 00:14:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:57.971 00:14:45 -- common/autotest_common.sh@10 -- # set +x 00:05:57.971 00:14:45 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:58.229 00:14:45 -- json_config/json_config.sh@246 -- # uname -s 00:05:58.229 00:14:45 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:58.229 00:14:45 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:58.229 00:14:45 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:58.229 00:14:45 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:58.229 00:14:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:58.229 00:14:45 -- common/autotest_common.sh@10 -- # set +x 00:05:58.229 00:14:45 -- json_config/json_config.sh@376 -- # killprocess 68112 00:05:58.229 00:14:45 -- common/autotest_common.sh@926 -- # '[' -z 68112 ']' 00:05:58.229 00:14:45 -- common/autotest_common.sh@930 -- # kill -0 68112 00:05:58.229 00:14:45 -- common/autotest_common.sh@931 -- # uname 00:05:58.229 00:14:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:58.229 00:14:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68112 00:05:58.229 00:14:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:58.229 00:14:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:58.229 killing process with pid 68112 00:05:58.229 00:14:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68112' 00:05:58.229 00:14:45 -- common/autotest_common.sh@945 -- # kill 68112 00:05:58.229 00:14:45 -- common/autotest_common.sh@950 -- # wait 68112 00:05:58.486 00:14:45 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:58.486 00:14:45 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:58.486 00:14:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:58.486 00:14:45 -- common/autotest_common.sh@10 -- # set +x 00:05:58.486 00:14:45 -- json_config/json_config.sh@381 -- # return 0 00:05:58.486 INFO: Success 00:05:58.486 00:14:45 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:58.486 00:05:58.486 real 0m8.242s 00:05:58.486 user 0m11.644s 00:05:58.486 sys 0m1.902s 00:05:58.486 00:14:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.486 00:14:45 -- common/autotest_common.sh@10 -- # set +x 00:05:58.486 ************************************ 00:05:58.486 END TEST json_config 00:05:58.486 ************************************ 00:05:58.486 00:14:45 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:58.486 
00:14:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.486 00:14:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.486 00:14:45 -- common/autotest_common.sh@10 -- # set +x 00:05:58.486 ************************************ 00:05:58.486 START TEST json_config_extra_key 00:05:58.486 ************************************ 00:05:58.486 00:14:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:58.486 00:14:45 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:58.486 00:14:45 -- nvmf/common.sh@7 -- # uname -s 00:05:58.486 00:14:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.486 00:14:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.486 00:14:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.486 00:14:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.486 00:14:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.486 00:14:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.486 00:14:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.486 00:14:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.486 00:14:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.486 00:14:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.745 00:14:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:05:58.745 00:14:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:05:58.745 00:14:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.745 00:14:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.745 00:14:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:58.745 00:14:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:58.745 00:14:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.745 00:14:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.745 00:14:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.745 00:14:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.745 00:14:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.745 00:14:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:58.745 00:14:45 -- paths/export.sh@5 -- # export PATH 00:05:58.745 00:14:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.745 00:14:45 -- nvmf/common.sh@46 -- # : 0 00:05:58.745 00:14:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:58.745 00:14:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:58.745 00:14:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:58.745 00:14:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.745 00:14:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.746 00:14:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:58.746 00:14:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:58.746 00:14:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:58.746 INFO: launching applications... 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68287 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:58.746 Waiting for target to run... 
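The launch just traced is the generic pattern used for every target instance in these tests: start spdk_tgt on a private RPC socket, feed it a JSON config (here extra_key.json), and block until the socket answers. Without the harness's waitforlisten helper, an equivalent wait might look like the following — the polling loop is an illustrative assumption, not the harness code:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  tgt_pid=$!
  # rpc_get_methods is a cheap probe: it succeeds once the RPC socket is up
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done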
00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68287 /var/tmp/spdk_tgt.sock 00:05:58.746 00:14:45 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:58.746 00:14:45 -- common/autotest_common.sh@819 -- # '[' -z 68287 ']' 00:05:58.746 00:14:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:58.746 00:14:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:58.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:58.746 00:14:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:58.746 00:14:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:58.746 00:14:45 -- common/autotest_common.sh@10 -- # set +x 00:05:58.746 [2024-07-13 00:14:45.783676] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:58.746 [2024-07-13 00:14:45.783780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68287 ] 00:05:59.311 [2024-07-13 00:14:46.315278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.311 [2024-07-13 00:14:46.406838] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:59.311 [2024-07-13 00:14:46.407023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.569 00:14:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.569 00:05:59.569 00:14:46 -- common/autotest_common.sh@852 -- # return 0 00:05:59.569 00:14:46 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:59.569 INFO: shutting down applications... 00:05:59.569 00:14:46 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
00:05:59.569 00:14:46 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:59.569 00:14:46 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:59.569 00:14:46 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:59.569 00:14:46 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68287 ]] 00:05:59.569 00:14:46 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68287 00:05:59.569 00:14:46 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:59.569 00:14:46 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:59.569 00:14:46 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68287 00:05:59.569 00:14:46 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:00.135 00:14:47 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:00.135 00:14:47 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:00.135 00:14:47 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68287 00:06:00.135 00:14:47 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:00.135 00:14:47 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:00.135 SPDK target shutdown done 00:06:00.135 00:14:47 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:00.135 00:14:47 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:00.135 Success 00:06:00.135 00:14:47 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:00.135 00:06:00.135 real 0m1.613s 00:06:00.135 user 0m1.421s 00:06:00.135 sys 0m0.544s 00:06:00.135 00:14:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.135 00:14:47 -- common/autotest_common.sh@10 -- # set +x 00:06:00.135 ************************************ 00:06:00.135 END TEST json_config_extra_key 00:06:00.135 ************************************ 00:06:00.135 00:14:47 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:00.135 00:14:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:00.135 00:14:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.135 00:14:47 -- common/autotest_common.sh@10 -- # set +x 00:06:00.135 ************************************ 00:06:00.135 START TEST alias_rpc 00:06:00.135 ************************************ 00:06:00.135 00:14:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:00.393 * Looking for test storage... 00:06:00.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:00.393 00:14:47 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:00.393 00:14:47 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68368 00:06:00.393 00:14:47 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.393 00:14:47 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68368 00:06:00.393 00:14:47 -- common/autotest_common.sh@819 -- # '[' -z 68368 ']' 00:06:00.393 00:14:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.393 00:14:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:00.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.393 00:14:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
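For reference, the SIGINT shutdown completed just above (and used earlier for pids 67843 and 68112) is a bounded poll: send SIGINT, then allow up to 30 half-second checks for the process to exit. Roughly, with the pid from this run:

  kill -SIGINT 68287
  for _ in $(seq 1 30); do
      kill -0 68287 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
      sleep 0.5
  done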
00:06:00.393 00:14:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:00.393 00:14:47 -- common/autotest_common.sh@10 -- # set +x 00:06:00.393 [2024-07-13 00:14:47.459509] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:00.393 [2024-07-13 00:14:47.459663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68368 ] 00:06:00.393 [2024-07-13 00:14:47.597654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.675 [2024-07-13 00:14:47.688641] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:00.675 [2024-07-13 00:14:47.688839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.250 00:14:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:01.250 00:14:48 -- common/autotest_common.sh@852 -- # return 0 00:06:01.250 00:14:48 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:01.508 00:14:48 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68368 00:06:01.508 00:14:48 -- common/autotest_common.sh@926 -- # '[' -z 68368 ']' 00:06:01.509 00:14:48 -- common/autotest_common.sh@930 -- # kill -0 68368 00:06:01.509 00:14:48 -- common/autotest_common.sh@931 -- # uname 00:06:01.509 00:14:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:01.509 00:14:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68368 00:06:01.509 00:14:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:01.509 00:14:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:01.509 killing process with pid 68368 00:06:01.509 00:14:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68368' 00:06:01.509 00:14:48 -- common/autotest_common.sh@945 -- # kill 68368 00:06:01.509 00:14:48 -- common/autotest_common.sh@950 -- # wait 68368 00:06:02.075 00:06:02.075 real 0m1.776s 00:06:02.075 user 0m1.962s 00:06:02.075 sys 0m0.471s 00:06:02.075 00:14:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.075 00:14:49 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 ************************************ 00:06:02.075 END TEST alias_rpc 00:06:02.075 ************************************ 00:06:02.075 00:14:49 -- spdk/autotest.sh@182 -- # [[ 1 -eq 0 ]] 00:06:02.075 00:14:49 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:02.075 00:14:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:02.075 00:14:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.075 00:14:49 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 ************************************ 00:06:02.075 START TEST dpdk_mem_utility 00:06:02.075 ************************************ 00:06:02.075 00:14:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:02.075 * Looking for test storage... 
00:06:02.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:02.075 00:14:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:02.075 00:14:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68453 00:06:02.075 00:14:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.075 00:14:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68453 00:06:02.075 00:14:49 -- common/autotest_common.sh@819 -- # '[' -z 68453 ']' 00:06:02.075 00:14:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.075 00:14:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:02.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.075 00:14:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.075 00:14:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:02.075 00:14:49 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 [2024-07-13 00:14:49.294803] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:02.075 [2024-07-13 00:14:49.294930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68453 ] 00:06:02.332 [2024-07-13 00:14:49.429474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.332 [2024-07-13 00:14:49.518691] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:02.333 [2024-07-13 00:14:49.518870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.268 00:14:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:03.268 00:14:50 -- common/autotest_common.sh@852 -- # return 0 00:06:03.268 00:14:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:03.268 00:14:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:03.268 00:14:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.268 00:14:50 -- common/autotest_common.sh@10 -- # set +x 00:06:03.268 { 00:06:03.268 "filename": "/tmp/spdk_mem_dump.txt" 00:06:03.268 } 00:06:03.268 00:14:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.268 00:14:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:03.268 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:03.268 1 heaps totaling size 814.000000 MiB 00:06:03.268 size: 814.000000 MiB heap id: 0 00:06:03.268 end heaps---------- 00:06:03.268 8 mempools totaling size 598.116089 MiB 00:06:03.268 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:03.268 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:03.268 size: 84.521057 MiB name: bdev_io_68453 00:06:03.268 size: 51.011292 MiB name: evtpool_68453 00:06:03.268 size: 50.003479 MiB name: msgpool_68453 00:06:03.268 size: 21.763794 MiB name: PDU_Pool 00:06:03.268 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:03.268 size: 0.026123 MiB name: Session_Pool 00:06:03.268 end mempools------- 00:06:03.268 6 memzones totaling size 4.142822 MiB 00:06:03.268 size: 1.000366 MiB name: RG_ring_0_68453 
00:06:03.268 size: 1.000366 MiB name: RG_ring_1_68453 00:06:03.268 size: 1.000366 MiB name: RG_ring_4_68453 00:06:03.268 size: 1.000366 MiB name: RG_ring_5_68453 00:06:03.268 size: 0.125366 MiB name: RG_ring_2_68453 00:06:03.268 size: 0.015991 MiB name: RG_ring_3_68453 00:06:03.268 end memzones------- 00:06:03.268 00:14:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:03.268 heap id: 0 total size: 814.000000 MiB number of busy elements: 210 number of free elements: 15 00:06:03.268 list of free elements. size: 12.488403 MiB 00:06:03.268 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:03.268 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:03.268 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:03.268 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:03.268 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:03.268 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:03.268 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:03.268 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:03.268 element at address: 0x200000200000 with size: 0.837219 MiB 00:06:03.268 element at address: 0x20001aa00000 with size: 0.572815 MiB 00:06:03.268 element at address: 0x20000b200000 with size: 0.489807 MiB 00:06:03.268 element at address: 0x200000800000 with size: 0.487061 MiB 00:06:03.268 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:03.268 element at address: 0x200027e00000 with size: 0.399048 MiB 00:06:03.268 element at address: 0x200003a00000 with size: 0.351685 MiB 00:06:03.268 list of standard malloc elements. size: 199.249023 MiB 00:06:03.268 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:03.268 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:03.268 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:03.268 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:03.268 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:03.268 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:03.268 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:03.268 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:03.268 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:03.268 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:06:03.268 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:03.268 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:03.269 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:03.269 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:03.269 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:03.269 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:03.269 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:03.269 element at 
address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:03.269 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:03.269 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:03.269 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94480 
with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:03.269 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e66280 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e66340 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6cf40 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6e340 with size: 0.000183 MiB 
00:06:03.269 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:03.269 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:03.270 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:03.270 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:03.270 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:03.270 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:03.270 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:03.270 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:03.270 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:03.270 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:03.270 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:03.270 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:03.270 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:03.270 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:03.270 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:03.270 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:03.270 list of memzone associated elements. 
size: 602.262573 MiB 00:06:03.270 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:03.270 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:03.270 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:03.270 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:03.270 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:03.270 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68453_0 00:06:03.270 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:03.270 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68453_0 00:06:03.270 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:03.270 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68453_0 00:06:03.270 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:03.270 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:03.270 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:03.270 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:03.270 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:03.270 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68453 00:06:03.270 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:03.270 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68453 00:06:03.270 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:03.270 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68453 00:06:03.270 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:03.270 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:03.270 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:03.270 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:03.270 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:03.270 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:03.270 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:03.270 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:03.270 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:03.270 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68453 00:06:03.270 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:03.270 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68453 00:06:03.270 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:03.270 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68453 00:06:03.270 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:03.270 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68453 00:06:03.270 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:03.270 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68453 00:06:03.270 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:03.270 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:03.270 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:03.270 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:03.270 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:03.270 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:03.270 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:03.270 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68453 00:06:03.270 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:03.270 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:03.270 element at address: 0x200027e66400 with size: 0.023743 MiB 00:06:03.270 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:03.270 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:03.270 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68453 00:06:03.270 element at address: 0x200027e6c540 with size: 0.002441 MiB 00:06:03.270 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:03.270 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:06:03.270 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68453 00:06:03.270 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:03.270 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68453 00:06:03.270 element at address: 0x200027e6d000 with size: 0.000305 MiB 00:06:03.270 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:03.270 00:14:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:03.270 00:14:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68453 00:06:03.270 00:14:50 -- common/autotest_common.sh@926 -- # '[' -z 68453 ']' 00:06:03.270 00:14:50 -- common/autotest_common.sh@930 -- # kill -0 68453 00:06:03.270 00:14:50 -- common/autotest_common.sh@931 -- # uname 00:06:03.270 00:14:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:03.270 00:14:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68453 00:06:03.270 00:14:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:03.270 00:14:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:03.270 killing process with pid 68453 00:06:03.270 00:14:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68453' 00:06:03.270 00:14:50 -- common/autotest_common.sh@945 -- # kill 68453 00:06:03.270 00:14:50 -- common/autotest_common.sh@950 -- # wait 68453 00:06:03.836 00:06:03.836 real 0m1.628s 00:06:03.836 user 0m1.749s 00:06:03.836 sys 0m0.431s 00:06:03.836 00:14:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.836 00:14:50 -- common/autotest_common.sh@10 -- # set +x 00:06:03.836 ************************************ 00:06:03.836 END TEST dpdk_mem_utility 00:06:03.836 ************************************ 00:06:03.836 00:14:50 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:03.836 00:14:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:03.836 00:14:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.836 00:14:50 -- common/autotest_common.sh@10 -- # set +x 00:06:03.836 ************************************ 00:06:03.836 START TEST event 00:06:03.836 ************************************ 00:06:03.836 00:14:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:03.836 * Looking for test storage... 
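Note: the memory report printed above is produced in two steps — the env_dpdk_get_mem_stats RPC makes the running target write its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then summarizes that dump (heaps, mempools, memzones; with -m it appears to select a single heap for the per-element listing, heap id 0 in this run). A rough by-hand equivalent, assuming a target is already listening on /var/tmp/spdk.sock and commands are run from the SPDK repository root:
./scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
./scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
./scripts/dpdk_mem_info.py -m 0           # detailed element listing for heap id 0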
00:06:03.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:03.836 00:14:50 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:03.836 00:14:50 -- bdev/nbd_common.sh@6 -- # set -e 00:06:03.836 00:14:50 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.837 00:14:50 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:03.837 00:14:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.837 00:14:50 -- common/autotest_common.sh@10 -- # set +x 00:06:03.837 ************************************ 00:06:03.837 START TEST event_perf 00:06:03.837 ************************************ 00:06:03.837 00:14:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.837 Running I/O for 1 seconds...[2024-07-13 00:14:50.944723] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:03.837 [2024-07-13 00:14:50.944828] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68542 ] 00:06:04.095 [2024-07-13 00:14:51.083039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.095 [2024-07-13 00:14:51.179165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.095 [2024-07-13 00:14:51.179308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.095 [2024-07-13 00:14:51.179824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.095 [2024-07-13 00:14:51.179881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.469 Running I/O for 1 seconds... 00:06:05.469 lcore 0: 143232 00:06:05.469 lcore 1: 143228 00:06:05.470 lcore 2: 143229 00:06:05.470 lcore 3: 143230 00:06:05.470 done. 00:06:05.470 00:06:05.470 real 0m1.353s 00:06:05.470 user 0m4.180s 00:06:05.470 sys 0m0.057s 00:06:05.470 00:14:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.470 00:14:52 -- common/autotest_common.sh@10 -- # set +x 00:06:05.470 ************************************ 00:06:05.470 END TEST event_perf 00:06:05.470 ************************************ 00:06:05.470 00:14:52 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:05.470 00:14:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:05.470 00:14:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.470 00:14:52 -- common/autotest_common.sh@10 -- # set +x 00:06:05.470 ************************************ 00:06:05.470 START TEST event_reactor 00:06:05.470 ************************************ 00:06:05.470 00:14:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:05.470 [2024-07-13 00:14:52.347079] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:05.470 [2024-07-13 00:14:52.347166] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68581 ] 00:06:05.470 [2024-07-13 00:14:52.476715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.470 [2024-07-13 00:14:52.539219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.422 test_start 00:06:06.422 oneshot 00:06:06.422 tick 100 00:06:06.422 tick 100 00:06:06.422 tick 250 00:06:06.422 tick 100 00:06:06.422 tick 100 00:06:06.422 tick 100 00:06:06.422 tick 250 00:06:06.422 tick 500 00:06:06.422 tick 100 00:06:06.422 tick 100 00:06:06.422 tick 250 00:06:06.422 tick 100 00:06:06.422 tick 100 00:06:06.422 test_end 00:06:06.422 00:06:06.422 real 0m1.271s 00:06:06.422 user 0m1.104s 00:06:06.422 sys 0m0.061s 00:06:06.422 00:14:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.422 00:14:53 -- common/autotest_common.sh@10 -- # set +x 00:06:06.422 ************************************ 00:06:06.422 END TEST event_reactor 00:06:06.422 ************************************ 00:06:06.422 00:14:53 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.422 00:14:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:06.422 00:14:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.422 00:14:53 -- common/autotest_common.sh@10 -- # set +x 00:06:06.422 ************************************ 00:06:06.422 START TEST event_reactor_perf 00:06:06.422 ************************************ 00:06:06.422 00:14:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.679 [2024-07-13 00:14:53.664194] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:06.679 [2024-07-13 00:14:53.664283] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68616 ] 00:06:06.679 [2024-07-13 00:14:53.801421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.679 [2024-07-13 00:14:53.868776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.053 test_start 00:06:08.053 test_end 00:06:08.053 Performance: 424375 events per second 00:06:08.053 00:06:08.053 real 0m1.286s 00:06:08.053 user 0m1.133s 00:06:08.053 sys 0m0.048s 00:06:08.053 00:14:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.053 00:14:54 -- common/autotest_common.sh@10 -- # set +x 00:06:08.053 ************************************ 00:06:08.053 END TEST event_reactor_perf 00:06:08.053 ************************************ 00:06:08.053 00:14:54 -- event/event.sh@49 -- # uname -s 00:06:08.053 00:14:54 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:08.053 00:14:54 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:08.053 00:14:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.053 00:14:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.053 00:14:54 -- common/autotest_common.sh@10 -- # set +x 00:06:08.053 ************************************ 00:06:08.053 START TEST event_scheduler 00:06:08.053 ************************************ 00:06:08.053 00:14:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:08.053 * Looking for test storage... 00:06:08.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:08.053 00:14:55 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:08.053 00:14:55 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68671 00:06:08.053 00:14:55 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.053 00:14:55 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:08.053 00:14:55 -- scheduler/scheduler.sh@37 -- # waitforlisten 68671 00:06:08.053 00:14:55 -- common/autotest_common.sh@819 -- # '[' -z 68671 ']' 00:06:08.053 00:14:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.053 00:14:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:08.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.053 00:14:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.053 00:14:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:08.053 00:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.053 [2024-07-13 00:14:55.122008] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:08.053 [2024-07-13 00:14:55.122116] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68671 ] 00:06:08.054 [2024-07-13 00:14:55.264966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.312 [2024-07-13 00:14:55.338913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.312 [2024-07-13 00:14:55.339060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.312 [2024-07-13 00:14:55.339725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.312 [2024-07-13 00:14:55.339720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.312 00:14:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.312 00:14:55 -- common/autotest_common.sh@852 -- # return 0 00:06:08.312 00:14:55 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:08.312 00:14:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.312 00:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.312 POWER: Env isn't set yet! 00:06:08.312 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:08.312 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.312 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.312 POWER: Attempting to initialise PSTAT power management... 00:06:08.312 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.312 POWER: Cannot set governor of lcore 0 to performance 00:06:08.312 POWER: Attempting to initialise AMD PSTATE power management... 00:06:08.313 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.313 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.313 POWER: Attempting to initialise CPPC power management... 00:06:08.313 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.313 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.313 POWER: Attempting to initialise VM power management... 
00:06:08.313 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:08.313 POWER: Unable to set Power Management Environment for lcore 0 00:06:08.313 [2024-07-13 00:14:55.412727] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:08.313 [2024-07-13 00:14:55.412743] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:08.313 [2024-07-13 00:14:55.412753] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:08.313 [2024-07-13 00:14:55.412768] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:08.313 [2024-07-13 00:14:55.412778] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:08.313 [2024-07-13 00:14:55.412787] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:08.313 00:14:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.313 00:14:55 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:08.313 00:14:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.313 00:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.313 [2024-07-13 00:14:55.503487] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:08.313 00:14:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.313 00:14:55 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:08.313 00:14:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.313 00:14:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.313 00:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.313 ************************************ 00:06:08.313 START TEST scheduler_create_thread 00:06:08.313 ************************************ 00:06:08.313 00:14:55 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:08.313 00:14:55 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:08.313 00:14:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.313 00:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.313 2 00:06:08.313 00:14:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.313 00:14:55 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:08.313 00:14:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.313 00:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.313 3 00:06:08.313 00:14:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.313 00:14:55 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:08.313 00:14:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.313 00:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.572 4 00:06:08.572 00:14:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.572 00:14:55 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:08.572 00:14:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.572 00:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.572 5 00:06:08.572 00:14:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.572 00:14:55 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:08.572 00:14:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.572 00:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.572 6 00:06:08.572 00:14:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.572 00:14:55 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:08.572 00:14:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.572 00:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.572 7 00:06:08.572 00:14:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.572 00:14:55 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:08.572 00:14:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.572 00:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.572 8 00:06:08.572 00:14:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.572 00:14:55 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:08.572 00:14:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.572 00:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.572 9 00:06:08.572 00:14:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.572 00:14:55 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:08.572 00:14:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.572 00:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.572 10 00:06:08.572 00:14:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.572 00:14:55 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:08.572 00:14:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.572 00:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.572 00:14:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.572 00:14:55 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:08.572 00:14:55 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:08.572 00:14:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.572 00:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.572 00:14:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.572 00:14:55 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:08.572 00:14:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.572 00:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:08.572 00:14:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.572 00:14:55 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:08.572 00:14:55 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:08.572 00:14:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.572 00:14:55 -- common/autotest_common.sh@10 -- # set +x 00:06:09.522 00:14:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:09.522 00:06:09.522 real 0m1.171s 00:06:09.522 user 0m0.012s 00:06:09.522 sys 0m0.004s 00:06:09.522 00:14:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.522 00:14:56 -- common/autotest_common.sh@10 -- # set +x 00:06:09.522 
************************************ 00:06:09.522 END TEST scheduler_create_thread 00:06:09.522 ************************************ 00:06:09.522 00:14:56 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:09.522 00:14:56 -- scheduler/scheduler.sh@46 -- # killprocess 68671 00:06:09.522 00:14:56 -- common/autotest_common.sh@926 -- # '[' -z 68671 ']' 00:06:09.522 00:14:56 -- common/autotest_common.sh@930 -- # kill -0 68671 00:06:09.522 00:14:56 -- common/autotest_common.sh@931 -- # uname 00:06:09.522 00:14:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:09.522 00:14:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68671 00:06:09.793 00:14:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:09.793 killing process with pid 68671 00:06:09.793 00:14:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:09.793 00:14:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68671' 00:06:09.793 00:14:56 -- common/autotest_common.sh@945 -- # kill 68671 00:06:09.793 00:14:56 -- common/autotest_common.sh@950 -- # wait 68671 00:06:10.052 [2024-07-13 00:14:57.165685] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:10.311 00:06:10.311 real 0m2.376s 00:06:10.311 user 0m2.716s 00:06:10.311 sys 0m0.345s 00:06:10.311 00:14:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.311 00:14:57 -- common/autotest_common.sh@10 -- # set +x 00:06:10.311 ************************************ 00:06:10.311 END TEST event_scheduler 00:06:10.311 ************************************ 00:06:10.311 00:14:57 -- event/event.sh@51 -- # modprobe -n nbd 00:06:10.311 00:14:57 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:10.311 00:14:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:10.311 00:14:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.311 00:14:57 -- common/autotest_common.sh@10 -- # set +x 00:06:10.311 ************************************ 00:06:10.311 START TEST app_repeat 00:06:10.311 ************************************ 00:06:10.311 00:14:57 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:10.311 00:14:57 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.311 00:14:57 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.311 00:14:57 -- event/event.sh@13 -- # local nbd_list 00:06:10.311 00:14:57 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.311 00:14:57 -- event/event.sh@14 -- # local bdev_list 00:06:10.311 00:14:57 -- event/event.sh@15 -- # local repeat_times=4 00:06:10.311 00:14:57 -- event/event.sh@17 -- # modprobe nbd 00:06:10.311 00:14:57 -- event/event.sh@19 -- # repeat_pid=68760 00:06:10.311 00:14:57 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:10.311 00:14:57 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.311 Process app_repeat pid: 68760 00:06:10.311 00:14:57 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68760' 00:06:10.311 00:14:57 -- event/event.sh@23 -- # for i in {0..2} 00:06:10.311 spdk_app_start Round 0 00:06:10.311 00:14:57 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:10.311 00:14:57 -- event/event.sh@25 -- # waitforlisten 68760 /var/tmp/spdk-nbd.sock 00:06:10.311 00:14:57 -- common/autotest_common.sh@819 -- # '[' -z 68760 ']' 00:06:10.311 00:14:57 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.311 00:14:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:10.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.311 00:14:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.311 00:14:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:10.311 00:14:57 -- common/autotest_common.sh@10 -- # set +x 00:06:10.311 [2024-07-13 00:14:57.448583] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:10.311 [2024-07-13 00:14:57.448697] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68760 ] 00:06:10.570 [2024-07-13 00:14:57.586890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.570 [2024-07-13 00:14:57.659941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.570 [2024-07-13 00:14:57.659953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.520 00:14:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:11.520 00:14:58 -- common/autotest_common.sh@852 -- # return 0 00:06:11.520 00:14:58 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.520 Malloc0 00:06:11.520 00:14:58 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.781 Malloc1 00:06:11.781 00:14:58 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.781 00:14:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.781 00:14:58 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.781 00:14:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.781 00:14:58 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.781 00:14:58 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.781 00:14:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.781 00:14:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.781 00:14:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.781 00:14:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.781 00:14:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.781 00:14:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.781 00:14:58 -- bdev/nbd_common.sh@12 -- # local i 00:06:11.781 00:14:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.781 00:14:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.781 00:14:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.039 /dev/nbd0 00:06:12.039 00:14:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.039 00:14:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.039 00:14:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:12.039 00:14:59 -- common/autotest_common.sh@857 -- # local i 00:06:12.039 00:14:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:12.039 
00:14:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:12.039 00:14:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:12.039 00:14:59 -- common/autotest_common.sh@861 -- # break 00:06:12.039 00:14:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:12.039 00:14:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:12.039 00:14:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.039 1+0 records in 00:06:12.039 1+0 records out 00:06:12.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175494 s, 23.3 MB/s 00:06:12.039 00:14:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.039 00:14:59 -- common/autotest_common.sh@874 -- # size=4096 00:06:12.039 00:14:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.039 00:14:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:12.039 00:14:59 -- common/autotest_common.sh@877 -- # return 0 00:06:12.039 00:14:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.039 00:14:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.039 00:14:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.297 /dev/nbd1 00:06:12.297 00:14:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.297 00:14:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.297 00:14:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:12.297 00:14:59 -- common/autotest_common.sh@857 -- # local i 00:06:12.297 00:14:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:12.297 00:14:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:12.297 00:14:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:12.297 00:14:59 -- common/autotest_common.sh@861 -- # break 00:06:12.297 00:14:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:12.297 00:14:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:12.297 00:14:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.297 1+0 records in 00:06:12.297 1+0 records out 00:06:12.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290511 s, 14.1 MB/s 00:06:12.297 00:14:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.297 00:14:59 -- common/autotest_common.sh@874 -- # size=4096 00:06:12.297 00:14:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.297 00:14:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:12.297 00:14:59 -- common/autotest_common.sh@877 -- # return 0 00:06:12.297 00:14:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.297 00:14:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.297 00:14:59 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.297 00:14:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.297 00:14:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.555 00:14:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.555 { 00:06:12.555 "bdev_name": "Malloc0", 00:06:12.555 "nbd_device": "/dev/nbd0" 00:06:12.555 }, 00:06:12.555 { 00:06:12.555 "bdev_name": 
"Malloc1", 00:06:12.555 "nbd_device": "/dev/nbd1" 00:06:12.555 } 00:06:12.555 ]' 00:06:12.555 00:14:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.555 00:14:59 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.555 { 00:06:12.555 "bdev_name": "Malloc0", 00:06:12.555 "nbd_device": "/dev/nbd0" 00:06:12.555 }, 00:06:12.555 { 00:06:12.555 "bdev_name": "Malloc1", 00:06:12.555 "nbd_device": "/dev/nbd1" 00:06:12.555 } 00:06:12.555 ]' 00:06:12.555 00:14:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.555 /dev/nbd1' 00:06:12.555 00:14:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.556 00:14:59 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.556 /dev/nbd1' 00:06:12.556 00:14:59 -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.556 00:14:59 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.556 00:14:59 -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.556 00:14:59 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.556 00:14:59 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.556 00:14:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.556 00:14:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.556 00:14:59 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.556 00:14:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.556 00:14:59 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.556 00:14:59 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.814 256+0 records in 00:06:12.814 256+0 records out 00:06:12.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00723167 s, 145 MB/s 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.814 256+0 records in 00:06:12.814 256+0 records out 00:06:12.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259118 s, 40.5 MB/s 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.814 256+0 records in 00:06:12.814 256+0 records out 00:06:12.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269818 s, 38.9 MB/s 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@51 -- # local i 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.814 00:14:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.072 00:15:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.072 00:15:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.072 00:15:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.072 00:15:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.072 00:15:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.072 00:15:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.072 00:15:00 -- bdev/nbd_common.sh@41 -- # break 00:06:13.072 00:15:00 -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.072 00:15:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.072 00:15:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.330 00:15:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.330 00:15:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.330 00:15:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.330 00:15:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.330 00:15:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.330 00:15:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.330 00:15:00 -- bdev/nbd_common.sh@41 -- # break 00:06:13.330 00:15:00 -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.330 00:15:00 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.330 00:15:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.330 00:15:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.589 00:15:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.589 00:15:00 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.589 00:15:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.589 00:15:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.589 00:15:00 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.589 00:15:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.589 00:15:00 -- bdev/nbd_common.sh@65 -- # true 00:06:13.589 00:15:00 -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.589 00:15:00 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.589 00:15:00 -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.589 00:15:00 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.589 00:15:00 -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.589 00:15:00 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.847 00:15:00 -- event/event.sh@35 -- # sleep 3 00:06:14.106 [2024-07-13 00:15:01.155127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.106 [2024-07-13 00:15:01.220750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.106 
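The nbd_stop_disk / waitfornbd_exit pairs traced above reduce to a bounded poll on /proc/partitions: after the RPC asks the target to detach the device, the helper keeps checking whether the nbdX row has disappeared. A minimal sketch of that pattern, assuming a 20-iteration budget and a 0.1 s pause between checks (the real helper's sleep interval is not visible in this log):

    waitfornbd_exit_sketch() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                return 0          # device row is gone, detach finished
            fi
            sleep 0.1
        done
        echo "timed out waiting for $nbd_name to detach" >&2
        return 1
    }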
[2024-07-13 00:15:01.220762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.106 [2024-07-13 00:15:01.276278] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.106 [2024-07-13 00:15:01.276360] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.390 00:15:03 -- event/event.sh@23 -- # for i in {0..2} 00:06:17.390 spdk_app_start Round 1 00:06:17.390 00:15:03 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:17.390 00:15:03 -- event/event.sh@25 -- # waitforlisten 68760 /var/tmp/spdk-nbd.sock 00:06:17.391 00:15:03 -- common/autotest_common.sh@819 -- # '[' -z 68760 ']' 00:06:17.391 00:15:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.391 00:15:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:17.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.391 00:15:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.391 00:15:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:17.391 00:15:03 -- common/autotest_common.sh@10 -- # set +x 00:06:17.391 00:15:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:17.391 00:15:04 -- common/autotest_common.sh@852 -- # return 0 00:06:17.391 00:15:04 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.391 Malloc0 00:06:17.391 00:15:04 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.649 Malloc1 00:06:17.649 00:15:04 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.649 00:15:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.649 00:15:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.649 00:15:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.649 00:15:04 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.649 00:15:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.649 00:15:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.649 00:15:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.649 00:15:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.649 00:15:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.649 00:15:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.649 00:15:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.649 00:15:04 -- bdev/nbd_common.sh@12 -- # local i 00:06:17.649 00:15:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.649 00:15:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.649 00:15:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:17.907 /dev/nbd0 00:06:17.907 00:15:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.907 00:15:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.907 00:15:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:17.907 00:15:05 -- common/autotest_common.sh@857 -- # local i 00:06:17.907 00:15:05 -- common/autotest_common.sh@859 
-- # (( i = 1 )) 00:06:17.907 00:15:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:17.907 00:15:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:17.907 00:15:05 -- common/autotest_common.sh@861 -- # break 00:06:17.907 00:15:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:17.907 00:15:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:17.907 00:15:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.907 1+0 records in 00:06:17.907 1+0 records out 00:06:17.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216345 s, 18.9 MB/s 00:06:17.907 00:15:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.907 00:15:05 -- common/autotest_common.sh@874 -- # size=4096 00:06:17.907 00:15:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.907 00:15:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:17.907 00:15:05 -- common/autotest_common.sh@877 -- # return 0 00:06:17.907 00:15:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.907 00:15:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.907 00:15:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:18.166 /dev/nbd1 00:06:18.166 00:15:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.166 00:15:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.166 00:15:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:18.166 00:15:05 -- common/autotest_common.sh@857 -- # local i 00:06:18.166 00:15:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:18.166 00:15:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:18.166 00:15:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:18.166 00:15:05 -- common/autotest_common.sh@861 -- # break 00:06:18.166 00:15:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:18.166 00:15:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:18.166 00:15:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.166 1+0 records in 00:06:18.166 1+0 records out 00:06:18.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284649 s, 14.4 MB/s 00:06:18.166 00:15:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.166 00:15:05 -- common/autotest_common.sh@874 -- # size=4096 00:06:18.166 00:15:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.166 00:15:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:18.166 00:15:05 -- common/autotest_common.sh@877 -- # return 0 00:06:18.166 00:15:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.166 00:15:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.166 00:15:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.166 00:15:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.166 00:15:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.425 00:15:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:18.425 { 00:06:18.425 "bdev_name": "Malloc0", 00:06:18.425 "nbd_device": "/dev/nbd0" 00:06:18.425 }, 00:06:18.425 { 
00:06:18.425 "bdev_name": "Malloc1", 00:06:18.425 "nbd_device": "/dev/nbd1" 00:06:18.425 } 00:06:18.425 ]' 00:06:18.425 00:15:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.425 00:15:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.425 { 00:06:18.425 "bdev_name": "Malloc0", 00:06:18.425 "nbd_device": "/dev/nbd0" 00:06:18.425 }, 00:06:18.425 { 00:06:18.425 "bdev_name": "Malloc1", 00:06:18.425 "nbd_device": "/dev/nbd1" 00:06:18.425 } 00:06:18.425 ]' 00:06:18.425 00:15:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.425 /dev/nbd1' 00:06:18.425 00:15:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.425 00:15:05 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.425 /dev/nbd1' 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.684 256+0 records in 00:06:18.684 256+0 records out 00:06:18.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00697151 s, 150 MB/s 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.684 256+0 records in 00:06:18.684 256+0 records out 00:06:18.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269223 s, 38.9 MB/s 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.684 256+0 records in 00:06:18.684 256+0 records out 00:06:18.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298098 s, 35.2 MB/s 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.684 00:15:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:18.685 
00:15:05 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.685 00:15:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.685 00:15:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.685 00:15:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.685 00:15:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.685 00:15:05 -- bdev/nbd_common.sh@51 -- # local i 00:06:18.685 00:15:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.685 00:15:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.943 00:15:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.943 00:15:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.943 00:15:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.943 00:15:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.943 00:15:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.943 00:15:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.943 00:15:05 -- bdev/nbd_common.sh@41 -- # break 00:06:18.943 00:15:05 -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.943 00:15:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.943 00:15:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.202 00:15:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.202 00:15:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.202 00:15:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.202 00:15:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.202 00:15:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.202 00:15:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.202 00:15:06 -- bdev/nbd_common.sh@41 -- # break 00:06:19.202 00:15:06 -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.202 00:15:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.202 00:15:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.202 00:15:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.460 00:15:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.460 00:15:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.460 00:15:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.460 00:15:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.460 00:15:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.460 00:15:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.460 00:15:06 -- bdev/nbd_common.sh@65 -- # true 00:06:19.460 00:15:06 -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.460 00:15:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.460 00:15:06 -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.460 00:15:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.460 00:15:06 -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.460 00:15:06 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.718 00:15:06 -- event/event.sh@35 -- # sleep 3 00:06:19.976 [2024-07-13 00:15:07.059768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.976 [2024-07-13 00:15:07.177514] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 1 00:06:19.976 [2024-07-13 00:15:07.177543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.233 [2024-07-13 00:15:07.259762] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:20.233 [2024-07-13 00:15:07.259864] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.761 00:15:09 -- event/event.sh@23 -- # for i in {0..2} 00:06:22.761 spdk_app_start Round 2 00:06:22.761 00:15:09 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:22.761 00:15:09 -- event/event.sh@25 -- # waitforlisten 68760 /var/tmp/spdk-nbd.sock 00:06:22.761 00:15:09 -- common/autotest_common.sh@819 -- # '[' -z 68760 ']' 00:06:22.761 00:15:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.761 00:15:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:22.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.761 00:15:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.761 00:15:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:22.761 00:15:09 -- common/autotest_common.sh@10 -- # set +x 00:06:23.020 00:15:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:23.020 00:15:10 -- common/autotest_common.sh@852 -- # return 0 00:06:23.020 00:15:10 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.278 Malloc0 00:06:23.279 00:15:10 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.537 Malloc1 00:06:23.537 00:15:10 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.537 00:15:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.537 00:15:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.537 00:15:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.537 00:15:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.537 00:15:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.537 00:15:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.537 00:15:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.537 00:15:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.537 00:15:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.537 00:15:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.537 00:15:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.537 00:15:10 -- bdev/nbd_common.sh@12 -- # local i 00:06:23.537 00:15:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.537 00:15:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.537 00:15:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.795 /dev/nbd0 00:06:23.795 00:15:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.795 00:15:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.795 00:15:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:23.795 00:15:10 -- common/autotest_common.sh@857 -- # local i 00:06:23.795 00:15:10 
-- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:23.795 00:15:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:23.795 00:15:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:23.795 00:15:10 -- common/autotest_common.sh@861 -- # break 00:06:23.795 00:15:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:23.795 00:15:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:23.795 00:15:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.795 1+0 records in 00:06:23.795 1+0 records out 00:06:23.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222549 s, 18.4 MB/s 00:06:23.795 00:15:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.795 00:15:10 -- common/autotest_common.sh@874 -- # size=4096 00:06:23.795 00:15:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.795 00:15:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:23.795 00:15:10 -- common/autotest_common.sh@877 -- # return 0 00:06:23.795 00:15:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.795 00:15:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.795 00:15:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.054 /dev/nbd1 00:06:24.054 00:15:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.054 00:15:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.054 00:15:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:24.054 00:15:11 -- common/autotest_common.sh@857 -- # local i 00:06:24.054 00:15:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:24.054 00:15:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:24.054 00:15:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:24.054 00:15:11 -- common/autotest_common.sh@861 -- # break 00:06:24.054 00:15:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:24.054 00:15:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:24.054 00:15:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.054 1+0 records in 00:06:24.054 1+0 records out 00:06:24.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277921 s, 14.7 MB/s 00:06:24.054 00:15:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.054 00:15:11 -- common/autotest_common.sh@874 -- # size=4096 00:06:24.054 00:15:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.054 00:15:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:24.054 00:15:11 -- common/autotest_common.sh@877 -- # return 0 00:06:24.054 00:15:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.054 00:15:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.054 00:15:11 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.054 00:15:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.054 00:15:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.313 { 00:06:24.313 "bdev_name": "Malloc0", 00:06:24.313 "nbd_device": "/dev/nbd0" 
00:06:24.313 }, 00:06:24.313 { 00:06:24.313 "bdev_name": "Malloc1", 00:06:24.313 "nbd_device": "/dev/nbd1" 00:06:24.313 } 00:06:24.313 ]' 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.313 { 00:06:24.313 "bdev_name": "Malloc0", 00:06:24.313 "nbd_device": "/dev/nbd0" 00:06:24.313 }, 00:06:24.313 { 00:06:24.313 "bdev_name": "Malloc1", 00:06:24.313 "nbd_device": "/dev/nbd1" 00:06:24.313 } 00:06:24.313 ]' 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.313 /dev/nbd1' 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.313 /dev/nbd1' 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.313 256+0 records in 00:06:24.313 256+0 records out 00:06:24.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00591295 s, 177 MB/s 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.313 00:15:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.570 256+0 records in 00:06:24.571 256+0 records out 00:06:24.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251861 s, 41.6 MB/s 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.571 256+0 records in 00:06:24.571 256+0 records out 00:06:24.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274605 s, 38.2 MB/s 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@51 -- # local i 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.571 00:15:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.829 00:15:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.829 00:15:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.829 00:15:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.829 00:15:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.829 00:15:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.829 00:15:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.829 00:15:11 -- bdev/nbd_common.sh@41 -- # break 00:06:24.829 00:15:11 -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.829 00:15:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.829 00:15:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.086 00:15:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.086 00:15:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.086 00:15:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.086 00:15:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.086 00:15:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.086 00:15:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.086 00:15:12 -- bdev/nbd_common.sh@41 -- # break 00:06:25.086 00:15:12 -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.087 00:15:12 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.087 00:15:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.087 00:15:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.345 00:15:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.345 00:15:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.345 00:15:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.345 00:15:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.345 00:15:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.345 00:15:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.345 00:15:12 -- bdev/nbd_common.sh@65 -- # true 00:06:25.345 00:15:12 -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.345 00:15:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.345 00:15:12 -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.345 00:15:12 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.345 00:15:12 -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.345 00:15:12 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.603 00:15:12 -- event/event.sh@35 -- # sleep 3 00:06:25.861 [2024-07-13 00:15:12.859432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.861 [2024-07-13 00:15:12.904571] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:25.861 [2024-07-13 00:15:12.904573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.861 [2024-07-13 00:15:12.956811] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.861 [2024-07-13 00:15:12.956899] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.144 00:15:15 -- event/event.sh@38 -- # waitforlisten 68760 /var/tmp/spdk-nbd.sock 00:06:29.144 00:15:15 -- common/autotest_common.sh@819 -- # '[' -z 68760 ']' 00:06:29.144 00:15:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.144 00:15:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.144 00:15:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.144 00:15:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.144 00:15:15 -- common/autotest_common.sh@10 -- # set +x 00:06:29.144 00:15:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.144 00:15:15 -- common/autotest_common.sh@852 -- # return 0 00:06:29.144 00:15:15 -- event/event.sh@39 -- # killprocess 68760 00:06:29.144 00:15:15 -- common/autotest_common.sh@926 -- # '[' -z 68760 ']' 00:06:29.144 00:15:15 -- common/autotest_common.sh@930 -- # kill -0 68760 00:06:29.144 00:15:15 -- common/autotest_common.sh@931 -- # uname 00:06:29.144 00:15:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:29.144 00:15:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68760 00:06:29.144 00:15:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:29.144 00:15:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:29.144 killing process with pid 68760 00:06:29.144 00:15:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68760' 00:06:29.144 00:15:15 -- common/autotest_common.sh@945 -- # kill 68760 00:06:29.144 00:15:15 -- common/autotest_common.sh@950 -- # wait 68760 00:06:29.144 spdk_app_start is called in Round 0. 00:06:29.144 Shutdown signal received, stop current app iteration 00:06:29.144 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:29.144 spdk_app_start is called in Round 1. 00:06:29.144 Shutdown signal received, stop current app iteration 00:06:29.144 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:29.144 spdk_app_start is called in Round 2. 00:06:29.144 Shutdown signal received, stop current app iteration 00:06:29.144 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:29.144 spdk_app_start is called in Round 3. 
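The killprocess call traced above follows a fixed safety pattern before signalling the app_repeat target: confirm the pid is set and alive, look up its command name with ps, refuse to touch anything named sudo, then SIGTERM and reap it. A sketch of those steps as a standalone function, not the autotest helper itself:

    killprocess_sketch() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1            # must still be running
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" != sudo ] || return 1       # never signal sudo itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                           # reap it when it is our own child
    }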
00:06:29.144 Shutdown signal received, stop current app iteration 00:06:29.144 00:15:16 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:29.144 00:15:16 -- event/event.sh@42 -- # return 0 00:06:29.144 00:06:29.144 real 0m18.745s 00:06:29.144 user 0m41.718s 00:06:29.144 sys 0m3.137s 00:06:29.144 00:15:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.144 00:15:16 -- common/autotest_common.sh@10 -- # set +x 00:06:29.144 ************************************ 00:06:29.144 END TEST app_repeat 00:06:29.144 ************************************ 00:06:29.144 00:15:16 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:29.144 00:15:16 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:29.144 00:15:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:29.144 00:15:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.144 00:15:16 -- common/autotest_common.sh@10 -- # set +x 00:06:29.144 ************************************ 00:06:29.144 START TEST cpu_locks 00:06:29.144 ************************************ 00:06:29.144 00:15:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:29.144 * Looking for test storage... 00:06:29.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:29.144 00:15:16 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:29.144 00:15:16 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:29.144 00:15:16 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:29.144 00:15:16 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:29.144 00:15:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:29.144 00:15:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.144 00:15:16 -- common/autotest_common.sh@10 -- # set +x 00:06:29.144 ************************************ 00:06:29.144 START TEST default_locks 00:06:29.144 ************************************ 00:06:29.144 00:15:16 -- common/autotest_common.sh@1104 -- # default_locks 00:06:29.144 00:15:16 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69378 00:06:29.144 00:15:16 -- event/cpu_locks.sh@47 -- # waitforlisten 69378 00:06:29.144 00:15:16 -- common/autotest_common.sh@819 -- # '[' -z 69378 ']' 00:06:29.144 00:15:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.144 00:15:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.144 00:15:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.144 00:15:16 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.144 00:15:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.144 00:15:16 -- common/autotest_common.sh@10 -- # set +x 00:06:29.403 [2024-07-13 00:15:16.375194] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:29.403 [2024-07-13 00:15:16.375288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69378 ] 00:06:29.403 [2024-07-13 00:15:16.516971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.403 [2024-07-13 00:15:16.605820] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:29.403 [2024-07-13 00:15:16.606014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.340 00:15:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:30.340 00:15:17 -- common/autotest_common.sh@852 -- # return 0 00:06:30.340 00:15:17 -- event/cpu_locks.sh@49 -- # locks_exist 69378 00:06:30.340 00:15:17 -- event/cpu_locks.sh@22 -- # lslocks -p 69378 00:06:30.340 00:15:17 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.599 00:15:17 -- event/cpu_locks.sh@50 -- # killprocess 69378 00:06:30.599 00:15:17 -- common/autotest_common.sh@926 -- # '[' -z 69378 ']' 00:06:30.599 00:15:17 -- common/autotest_common.sh@930 -- # kill -0 69378 00:06:30.599 00:15:17 -- common/autotest_common.sh@931 -- # uname 00:06:30.599 00:15:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:30.599 00:15:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69378 00:06:30.599 00:15:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:30.599 00:15:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:30.599 killing process with pid 69378 00:06:30.599 00:15:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69378' 00:06:30.599 00:15:17 -- common/autotest_common.sh@945 -- # kill 69378 00:06:30.599 00:15:17 -- common/autotest_common.sh@950 -- # wait 69378 00:06:30.858 00:15:18 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69378 00:06:30.858 00:15:18 -- common/autotest_common.sh@640 -- # local es=0 00:06:30.858 00:15:18 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69378 00:06:30.858 00:15:18 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:30.858 00:15:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:30.858 00:15:18 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:30.858 00:15:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:30.858 00:15:18 -- common/autotest_common.sh@643 -- # waitforlisten 69378 00:06:30.858 00:15:18 -- common/autotest_common.sh@819 -- # '[' -z 69378 ']' 00:06:30.858 00:15:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.858 00:15:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:30.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.858 00:15:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
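The locks_exist check traced above relies on the scheduler's core-mask locks being ordinary file locks held by the target process, so lslocks on the pid exposes them by name. A sketch of that check, using the same spdk_cpu_lock pattern the log greps for:

    locks_exist_sketch() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # while spdk_tgt (pid 69378 in this run) is alive the check succeeds;
    # after killprocess the same grep finds nothing, e.g.:
    # locks_exist_sketch "$spdk_tgt_pid" && echo "core lock held"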
00:06:30.858 00:15:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:30.858 00:15:18 -- common/autotest_common.sh@10 -- # set +x 00:06:30.858 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69378) - No such process 00:06:30.858 ERROR: process (pid: 69378) is no longer running 00:06:30.858 00:15:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:30.858 00:15:18 -- common/autotest_common.sh@852 -- # return 1 00:06:30.858 00:15:18 -- common/autotest_common.sh@643 -- # es=1 00:06:30.858 00:15:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:30.858 00:15:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:30.858 00:15:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:30.858 00:15:18 -- event/cpu_locks.sh@54 -- # no_locks 00:06:30.858 00:15:18 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:30.858 00:15:18 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:30.858 00:15:18 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:30.858 00:06:30.858 real 0m1.769s 00:06:30.858 user 0m1.891s 00:06:30.858 sys 0m0.523s 00:06:30.858 00:15:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.858 00:15:18 -- common/autotest_common.sh@10 -- # set +x 00:06:30.858 ************************************ 00:06:30.858 END TEST default_locks 00:06:30.858 ************************************ 00:06:31.133 00:15:18 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:31.133 00:15:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:31.133 00:15:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:31.133 00:15:18 -- common/autotest_common.sh@10 -- # set +x 00:06:31.133 ************************************ 00:06:31.133 START TEST default_locks_via_rpc 00:06:31.133 ************************************ 00:06:31.133 00:15:18 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:31.133 00:15:18 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69442 00:06:31.133 00:15:18 -- event/cpu_locks.sh@63 -- # waitforlisten 69442 00:06:31.133 00:15:18 -- common/autotest_common.sh@819 -- # '[' -z 69442 ']' 00:06:31.133 00:15:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.133 00:15:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:31.133 00:15:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.133 00:15:18 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.133 00:15:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:31.133 00:15:18 -- common/autotest_common.sh@10 -- # set +x 00:06:31.133 [2024-07-13 00:15:18.204373] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
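The waitforlisten call above is a readiness gate: the new spdk_tgt (pid 69442) is only usable once it answers on /var/tmp/spdk.sock, and the helper allows up to 100 retries. One way such a wait loop can be written is sketched below, probing with rpc_get_methods over rpc.py; the probe RPC, the 0.5 s pause, and the rpc.py path are assumptions for illustration, not the autotest implementation:

    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1                        # target died before listening
            ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }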
00:06:31.133 [2024-07-13 00:15:18.204499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69442 ] 00:06:31.133 [2024-07-13 00:15:18.343791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.405 [2024-07-13 00:15:18.421964] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.405 [2024-07-13 00:15:18.422157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.340 00:15:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:32.340 00:15:19 -- common/autotest_common.sh@852 -- # return 0 00:06:32.340 00:15:19 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:32.340 00:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:32.340 00:15:19 -- common/autotest_common.sh@10 -- # set +x 00:06:32.340 00:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:32.340 00:15:19 -- event/cpu_locks.sh@67 -- # no_locks 00:06:32.340 00:15:19 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:32.340 00:15:19 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:32.340 00:15:19 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:32.340 00:15:19 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:32.340 00:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:32.340 00:15:19 -- common/autotest_common.sh@10 -- # set +x 00:06:32.340 00:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:32.340 00:15:19 -- event/cpu_locks.sh@71 -- # locks_exist 69442 00:06:32.340 00:15:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.340 00:15:19 -- event/cpu_locks.sh@22 -- # lslocks -p 69442 00:06:32.599 00:15:19 -- event/cpu_locks.sh@73 -- # killprocess 69442 00:06:32.599 00:15:19 -- common/autotest_common.sh@926 -- # '[' -z 69442 ']' 00:06:32.599 00:15:19 -- common/autotest_common.sh@930 -- # kill -0 69442 00:06:32.599 00:15:19 -- common/autotest_common.sh@931 -- # uname 00:06:32.599 00:15:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:32.599 00:15:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69442 00:06:32.599 00:15:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:32.599 00:15:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:32.599 killing process with pid 69442 00:06:32.599 00:15:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69442' 00:06:32.599 00:15:19 -- common/autotest_common.sh@945 -- # kill 69442 00:06:32.599 00:15:19 -- common/autotest_common.sh@950 -- # wait 69442 00:06:33.166 00:06:33.166 real 0m1.982s 00:06:33.166 user 0m2.163s 00:06:33.166 sys 0m0.586s 00:06:33.166 00:15:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.166 00:15:20 -- common/autotest_common.sh@10 -- # set +x 00:06:33.166 ************************************ 00:06:33.166 END TEST default_locks_via_rpc 00:06:33.166 ************************************ 00:06:33.166 00:15:20 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:33.166 00:15:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:33.166 00:15:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.166 00:15:20 -- common/autotest_common.sh@10 -- # set +x 00:06:33.166 
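default_locks_via_rpc, which finishes above, exercises the same core locks but toggles them at runtime over the RPC socket instead of at startup. A sketch of that toggle, assuming the target already listens on /var/tmp/spdk.sock and that rpc.py is run from the repo root (the log filters lslocks by pid; the system-wide grep here is a simplification):

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    rpc framework_disable_cpumask_locks             # release the per-core file locks
    lslocks | grep -c spdk_cpu_lock || true         # expect 0 matches while disabled
    rpc framework_enable_cpumask_locks              # take them again
    lslocks | grep -q spdk_cpu_lock && echo "core locks re-acquired"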
************************************ 00:06:33.166 START TEST non_locking_app_on_locked_coremask 00:06:33.166 ************************************ 00:06:33.166 00:15:20 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:33.166 00:15:20 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69511 00:06:33.166 00:15:20 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.166 00:15:20 -- event/cpu_locks.sh@81 -- # waitforlisten 69511 /var/tmp/spdk.sock 00:06:33.166 00:15:20 -- common/autotest_common.sh@819 -- # '[' -z 69511 ']' 00:06:33.166 00:15:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.166 00:15:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.166 00:15:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.166 00:15:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.166 00:15:20 -- common/autotest_common.sh@10 -- # set +x 00:06:33.166 [2024-07-13 00:15:20.228188] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:33.166 [2024-07-13 00:15:20.228292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69511 ] 00:06:33.166 [2024-07-13 00:15:20.369771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.425 [2024-07-13 00:15:20.454855] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.425 [2024-07-13 00:15:20.455007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.993 00:15:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.993 00:15:21 -- common/autotest_common.sh@852 -- # return 0 00:06:33.993 00:15:21 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69539 00:06:33.993 00:15:21 -- event/cpu_locks.sh@85 -- # waitforlisten 69539 /var/tmp/spdk2.sock 00:06:33.993 00:15:21 -- common/autotest_common.sh@819 -- # '[' -z 69539 ']' 00:06:33.993 00:15:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.993 00:15:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.993 00:15:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.993 00:15:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.993 00:15:21 -- common/autotest_common.sh@10 -- # set +x 00:06:33.993 00:15:21 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:34.251 [2024-07-13 00:15:21.265141] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:34.252 [2024-07-13 00:15:21.265252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69539 ] 00:06:34.252 [2024-07-13 00:15:21.407969] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
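The pair of launches above is the core of non_locking_app_on_locked_coremask: a first spdk_tgt takes the core 0 lock, and a second one is still allowed onto the same mask only because it passes --disable-cpumask-locks and stays out of the way on its own RPC socket. A bare sketch of that arrangement, with binary and socket paths assumed and the readiness waits elided:

    bin=./build/bin/spdk_tgt

    "$bin" -m 0x1 & pid1=$!
    # ... wait for /var/tmp/spdk.sock to come up ...
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
    # ... wait for /var/tmp/spdk2.sock to come up ...

    lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "first target holds the core 0 lock"
    lslocks -p "$pid2" | grep -q spdk_cpu_lock || echo "second target runs without it"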
00:06:34.252 [2024-07-13 00:15:21.408034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.510 [2024-07-13 00:15:21.599955] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:34.510 [2024-07-13 00:15:21.600141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.077 00:15:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.077 00:15:22 -- common/autotest_common.sh@852 -- # return 0 00:06:35.077 00:15:22 -- event/cpu_locks.sh@87 -- # locks_exist 69511 00:06:35.077 00:15:22 -- event/cpu_locks.sh@22 -- # lslocks -p 69511 00:06:35.077 00:15:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.013 00:15:23 -- event/cpu_locks.sh@89 -- # killprocess 69511 00:06:36.013 00:15:23 -- common/autotest_common.sh@926 -- # '[' -z 69511 ']' 00:06:36.013 00:15:23 -- common/autotest_common.sh@930 -- # kill -0 69511 00:06:36.013 00:15:23 -- common/autotest_common.sh@931 -- # uname 00:06:36.013 00:15:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:36.013 00:15:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69511 00:06:36.013 00:15:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:36.013 00:15:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:36.013 killing process with pid 69511 00:06:36.013 00:15:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69511' 00:06:36.013 00:15:23 -- common/autotest_common.sh@945 -- # kill 69511 00:06:36.013 00:15:23 -- common/autotest_common.sh@950 -- # wait 69511 00:06:36.946 00:15:23 -- event/cpu_locks.sh@90 -- # killprocess 69539 00:06:36.946 00:15:23 -- common/autotest_common.sh@926 -- # '[' -z 69539 ']' 00:06:36.946 00:15:23 -- common/autotest_common.sh@930 -- # kill -0 69539 00:06:36.946 00:15:23 -- common/autotest_common.sh@931 -- # uname 00:06:36.946 00:15:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:36.946 00:15:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69539 00:06:36.946 00:15:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:36.946 00:15:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:36.946 killing process with pid 69539 00:06:36.946 00:15:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69539' 00:06:36.946 00:15:23 -- common/autotest_common.sh@945 -- # kill 69539 00:06:36.946 00:15:23 -- common/autotest_common.sh@950 -- # wait 69539 00:06:37.206 00:06:37.206 real 0m4.057s 00:06:37.206 user 0m4.499s 00:06:37.206 sys 0m1.151s 00:06:37.206 00:15:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.206 00:15:24 -- common/autotest_common.sh@10 -- # set +x 00:06:37.206 ************************************ 00:06:37.206 END TEST non_locking_app_on_locked_coremask 00:06:37.206 ************************************ 00:06:37.206 00:15:24 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:37.206 00:15:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.206 00:15:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.206 00:15:24 -- common/autotest_common.sh@10 -- # set +x 00:06:37.206 ************************************ 00:06:37.206 START TEST locking_app_on_unlocked_coremask 00:06:37.206 ************************************ 00:06:37.206 00:15:24 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:37.206 00:15:24 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69618 00:06:37.206 00:15:24 -- event/cpu_locks.sh@99 -- # waitforlisten 69618 /var/tmp/spdk.sock 00:06:37.206 00:15:24 -- common/autotest_common.sh@819 -- # '[' -z 69618 ']' 00:06:37.206 00:15:24 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:37.206 00:15:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.206 00:15:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.206 00:15:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.206 00:15:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.206 00:15:24 -- common/autotest_common.sh@10 -- # set +x 00:06:37.206 [2024-07-13 00:15:24.340353] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:37.206 [2024-07-13 00:15:24.340441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69618 ] 00:06:37.464 [2024-07-13 00:15:24.478878] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:37.464 [2024-07-13 00:15:24.478934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.464 [2024-07-13 00:15:24.550144] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.464 [2024-07-13 00:15:24.550289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.401 00:15:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.401 00:15:25 -- common/autotest_common.sh@852 -- # return 0 00:06:38.401 00:15:25 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69646 00:06:38.401 00:15:25 -- event/cpu_locks.sh@103 -- # waitforlisten 69646 /var/tmp/spdk2.sock 00:06:38.401 00:15:25 -- common/autotest_common.sh@819 -- # '[' -z 69646 ']' 00:06:38.401 00:15:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.401 00:15:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.401 00:15:25 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:38.401 00:15:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.401 00:15:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.401 00:15:25 -- common/autotest_common.sh@10 -- # set +x 00:06:38.401 [2024-07-13 00:15:25.369296] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:38.401 [2024-07-13 00:15:25.369388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69646 ] 00:06:38.401 [2024-07-13 00:15:25.518146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.660 [2024-07-13 00:15:25.710991] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.660 [2024-07-13 00:15:25.711242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.229 00:15:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.229 00:15:26 -- common/autotest_common.sh@852 -- # return 0 00:06:39.229 00:15:26 -- event/cpu_locks.sh@105 -- # locks_exist 69646 00:06:39.229 00:15:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.229 00:15:26 -- event/cpu_locks.sh@22 -- # lslocks -p 69646 00:06:40.163 00:15:27 -- event/cpu_locks.sh@107 -- # killprocess 69618 00:06:40.163 00:15:27 -- common/autotest_common.sh@926 -- # '[' -z 69618 ']' 00:06:40.163 00:15:27 -- common/autotest_common.sh@930 -- # kill -0 69618 00:06:40.164 00:15:27 -- common/autotest_common.sh@931 -- # uname 00:06:40.164 00:15:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:40.164 00:15:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69618 00:06:40.164 killing process with pid 69618 00:06:40.164 00:15:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:40.164 00:15:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:40.164 00:15:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69618' 00:06:40.164 00:15:27 -- common/autotest_common.sh@945 -- # kill 69618 00:06:40.164 00:15:27 -- common/autotest_common.sh@950 -- # wait 69618 00:06:41.114 00:15:28 -- event/cpu_locks.sh@108 -- # killprocess 69646 00:06:41.114 00:15:28 -- common/autotest_common.sh@926 -- # '[' -z 69646 ']' 00:06:41.114 00:15:28 -- common/autotest_common.sh@930 -- # kill -0 69646 00:06:41.114 00:15:28 -- common/autotest_common.sh@931 -- # uname 00:06:41.114 00:15:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.114 00:15:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69646 00:06:41.114 killing process with pid 69646 00:06:41.114 00:15:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:41.114 00:15:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:41.114 00:15:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69646' 00:06:41.114 00:15:28 -- common/autotest_common.sh@945 -- # kill 69646 00:06:41.114 00:15:28 -- common/autotest_common.sh@950 -- # wait 69646 00:06:41.690 00:06:41.690 real 0m4.385s 00:06:41.690 user 0m4.821s 00:06:41.690 sys 0m1.153s 00:06:41.690 00:15:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.690 00:15:28 -- common/autotest_common.sh@10 -- # set +x 00:06:41.690 ************************************ 00:06:41.690 END TEST locking_app_on_unlocked_coremask 00:06:41.690 ************************************ 00:06:41.690 00:15:28 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:41.690 00:15:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.690 00:15:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.690 00:15:28 -- common/autotest_common.sh@10 -- # set +x 
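Editor's note on the lock check traced above: the locks_exist helper (event/cpu_locks.sh@22) reduces to an lslocks query against the per-core lock files. A minimal stand-alone sketch, with the helper body reconstructed from the traced commands and only the pid taken from this run; everything else is illustrative:

  # spdk_tgt keeps an advisory lock on one /var/tmp/spdk_cpu_lock_* file per
  # claimed core; lslocks -p <pid> lists a process's file locks, so a grep
  # hit means that process still holds its CPU core lock.
  locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  locks_exist 69646 && echo "pid 69646 holds its CPU core lock"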
00:06:41.690 ************************************ 00:06:41.690 START TEST locking_app_on_locked_coremask 00:06:41.690 ************************************ 00:06:41.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.690 00:15:28 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:41.690 00:15:28 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69725 00:06:41.690 00:15:28 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.690 00:15:28 -- event/cpu_locks.sh@116 -- # waitforlisten 69725 /var/tmp/spdk.sock 00:06:41.690 00:15:28 -- common/autotest_common.sh@819 -- # '[' -z 69725 ']' 00:06:41.690 00:15:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.690 00:15:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:41.690 00:15:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.690 00:15:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:41.690 00:15:28 -- common/autotest_common.sh@10 -- # set +x 00:06:41.690 [2024-07-13 00:15:28.784136] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:41.690 [2024-07-13 00:15:28.784516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69725 ] 00:06:41.950 [2024-07-13 00:15:28.925636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.950 [2024-07-13 00:15:28.996557] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:41.950 [2024-07-13 00:15:28.997078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.884 00:15:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:42.884 00:15:29 -- common/autotest_common.sh@852 -- # return 0 00:06:42.884 00:15:29 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69753 00:06:42.884 00:15:29 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:42.884 00:15:29 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69753 /var/tmp/spdk2.sock 00:06:42.885 00:15:29 -- common/autotest_common.sh@640 -- # local es=0 00:06:42.885 00:15:29 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69753 /var/tmp/spdk2.sock 00:06:42.885 00:15:29 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:42.885 00:15:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:42.885 00:15:29 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:42.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.885 00:15:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:42.885 00:15:29 -- common/autotest_common.sh@643 -- # waitforlisten 69753 /var/tmp/spdk2.sock 00:06:42.885 00:15:29 -- common/autotest_common.sh@819 -- # '[' -z 69753 ']' 00:06:42.885 00:15:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.885 00:15:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:42.885 00:15:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:42.885 00:15:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:42.885 00:15:29 -- common/autotest_common.sh@10 -- # set +x 00:06:42.885 [2024-07-13 00:15:29.817826] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:42.885 [2024-07-13 00:15:29.817928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69753 ] 00:06:42.885 [2024-07-13 00:15:29.959505] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69725 has claimed it. 00:06:42.885 [2024-07-13 00:15:29.959610] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:43.451 ERROR: process (pid: 69753) is no longer running 00:06:43.451 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69753) - No such process 00:06:43.451 00:15:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:43.451 00:15:30 -- common/autotest_common.sh@852 -- # return 1 00:06:43.451 00:15:30 -- common/autotest_common.sh@643 -- # es=1 00:06:43.451 00:15:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:43.451 00:15:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:43.451 00:15:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:43.451 00:15:30 -- event/cpu_locks.sh@122 -- # locks_exist 69725 00:06:43.451 00:15:30 -- event/cpu_locks.sh@22 -- # lslocks -p 69725 00:06:43.451 00:15:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.020 00:15:30 -- event/cpu_locks.sh@124 -- # killprocess 69725 00:06:44.020 00:15:30 -- common/autotest_common.sh@926 -- # '[' -z 69725 ']' 00:06:44.020 00:15:30 -- common/autotest_common.sh@930 -- # kill -0 69725 00:06:44.020 00:15:30 -- common/autotest_common.sh@931 -- # uname 00:06:44.020 00:15:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:44.020 00:15:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69725 00:06:44.020 killing process with pid 69725 00:06:44.020 00:15:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:44.020 00:15:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:44.020 00:15:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69725' 00:06:44.020 00:15:30 -- common/autotest_common.sh@945 -- # kill 69725 00:06:44.020 00:15:30 -- common/autotest_common.sh@950 -- # wait 69725 00:06:44.279 ************************************ 00:06:44.279 END TEST locking_app_on_locked_coremask 00:06:44.279 00:06:44.279 real 0m2.658s 00:06:44.279 user 0m2.974s 00:06:44.279 sys 0m0.753s 00:06:44.279 00:15:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.279 00:15:31 -- common/autotest_common.sh@10 -- # set +x 00:06:44.279 ************************************ 00:06:44.279 00:15:31 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:44.279 00:15:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:44.279 00:15:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.279 00:15:31 -- common/autotest_common.sh@10 -- # set +x 00:06:44.279 ************************************ 00:06:44.279 START TEST locking_overlapped_coremask 00:06:44.279 ************************************ 00:06:44.279 00:15:31 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:44.279 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.279 00:15:31 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69812 00:06:44.279 00:15:31 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:44.279 00:15:31 -- event/cpu_locks.sh@133 -- # waitforlisten 69812 /var/tmp/spdk.sock 00:06:44.279 00:15:31 -- common/autotest_common.sh@819 -- # '[' -z 69812 ']' 00:06:44.279 00:15:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.279 00:15:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:44.279 00:15:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.279 00:15:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:44.279 00:15:31 -- common/autotest_common.sh@10 -- # set +x 00:06:44.279 [2024-07-13 00:15:31.488339] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:44.279 [2024-07-13 00:15:31.488669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69812 ] 00:06:44.537 [2024-07-13 00:15:31.629281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.537 [2024-07-13 00:15:31.726898] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:44.537 [2024-07-13 00:15:31.727537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.537 [2024-07-13 00:15:31.727683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.537 [2024-07-13 00:15:31.727683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.470 00:15:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:45.470 00:15:32 -- common/autotest_common.sh@852 -- # return 0 00:06:45.470 00:15:32 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69842 00:06:45.470 00:15:32 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69842 /var/tmp/spdk2.sock 00:06:45.470 00:15:32 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:45.470 00:15:32 -- common/autotest_common.sh@640 -- # local es=0 00:06:45.470 00:15:32 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69842 /var/tmp/spdk2.sock 00:06:45.470 00:15:32 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:45.470 00:15:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.470 00:15:32 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:45.470 00:15:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.470 00:15:32 -- common/autotest_common.sh@643 -- # waitforlisten 69842 /var/tmp/spdk2.sock 00:06:45.470 00:15:32 -- common/autotest_common.sh@819 -- # '[' -z 69842 ']' 00:06:45.470 00:15:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.470 00:15:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:45.470 00:15:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
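Editor's note on the overlapped-coremask setup above: the two command lines use -m 0x7 (cores 0-2) for the first target and -m 0x1c (cores 2-4) for the second, so the only shared core is core 2. A quick sanity check of that overlap (a sketch, not part of the captured run):

  # Bitwise AND of the two masks isolates the contested core.
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2 / core 2

which matches the "Cannot create lock on core 2" error recorded a few records further on.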
00:06:45.470 00:15:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:45.470 00:15:32 -- common/autotest_common.sh@10 -- # set +x 00:06:45.470 [2024-07-13 00:15:32.501104] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:45.470 [2024-07-13 00:15:32.501818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69842 ] 00:06:45.470 [2024-07-13 00:15:32.653034] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69812 has claimed it. 00:06:45.470 [2024-07-13 00:15:32.653097] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:46.034 ERROR: process (pid: 69842) is no longer running 00:06:46.034 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69842) - No such process 00:06:46.034 00:15:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:46.034 00:15:33 -- common/autotest_common.sh@852 -- # return 1 00:06:46.034 00:15:33 -- common/autotest_common.sh@643 -- # es=1 00:06:46.034 00:15:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:46.034 00:15:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:46.034 00:15:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:46.034 00:15:33 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:46.034 00:15:33 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.034 00:15:33 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.034 00:15:33 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.034 00:15:33 -- event/cpu_locks.sh@141 -- # killprocess 69812 00:06:46.034 00:15:33 -- common/autotest_common.sh@926 -- # '[' -z 69812 ']' 00:06:46.034 00:15:33 -- common/autotest_common.sh@930 -- # kill -0 69812 00:06:46.034 00:15:33 -- common/autotest_common.sh@931 -- # uname 00:06:46.034 00:15:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:46.034 00:15:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69812 00:06:46.034 killing process with pid 69812 00:06:46.034 00:15:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:46.034 00:15:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:46.034 00:15:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69812' 00:06:46.034 00:15:33 -- common/autotest_common.sh@945 -- # kill 69812 00:06:46.034 00:15:33 -- common/autotest_common.sh@950 -- # wait 69812 00:06:46.599 00:06:46.599 real 0m2.185s 00:06:46.599 user 0m6.078s 00:06:46.599 sys 0m0.461s 00:06:46.599 00:15:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.599 ************************************ 00:06:46.599 END TEST locking_overlapped_coremask 00:06:46.599 ************************************ 00:06:46.599 00:15:33 -- common/autotest_common.sh@10 -- # set +x 00:06:46.599 00:15:33 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:46.599 00:15:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:46.599 00:15:33 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.599 00:15:33 -- common/autotest_common.sh@10 -- # set +x 00:06:46.599 ************************************ 00:06:46.599 START TEST locking_overlapped_coremask_via_rpc 00:06:46.599 ************************************ 00:06:46.599 00:15:33 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:46.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.599 00:15:33 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=69888 00:06:46.599 00:15:33 -- event/cpu_locks.sh@149 -- # waitforlisten 69888 /var/tmp/spdk.sock 00:06:46.599 00:15:33 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:46.599 00:15:33 -- common/autotest_common.sh@819 -- # '[' -z 69888 ']' 00:06:46.599 00:15:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.599 00:15:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:46.599 00:15:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.599 00:15:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:46.599 00:15:33 -- common/autotest_common.sh@10 -- # set +x 00:06:46.600 [2024-07-13 00:15:33.721969] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:46.600 [2024-07-13 00:15:33.722078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69888 ] 00:06:46.880 [2024-07-13 00:15:33.862672] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:46.880 [2024-07-13 00:15:33.862731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.880 [2024-07-13 00:15:33.951264] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:46.880 [2024-07-13 00:15:33.951559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.880 [2024-07-13 00:15:33.951918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.880 [2024-07-13 00:15:33.951930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.446 00:15:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:47.446 00:15:34 -- common/autotest_common.sh@852 -- # return 0 00:06:47.446 00:15:34 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=69918 00:06:47.446 00:15:34 -- event/cpu_locks.sh@153 -- # waitforlisten 69918 /var/tmp/spdk2.sock 00:06:47.446 00:15:34 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:47.446 00:15:34 -- common/autotest_common.sh@819 -- # '[' -z 69918 ']' 00:06:47.446 00:15:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.446 00:15:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:47.446 00:15:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
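Editor's note on the check_remaining_locks step traced a little earlier (event/cpu_locks.sh@36-38): it amounts to comparing the lock files actually present under /var/tmp with the set a 0x7 mask should leave behind. A reconstruction of those three traced lines, assuming the same paths; the wrapper function and the final echo are illustrative:

  check_remaining_locks() {
      # Glob what exists vs. the lock files expected for cores 0-2.
      local locks=(/var/tmp/spdk_cpu_lock_*)
      local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
      [[ "${locks[*]}" == "${locks_expected[*]}" ]]
  }

  check_remaining_locks && echo "only cores 0, 1 and 2 are locked, as expected"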
00:06:47.446 00:15:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:47.446 00:15:34 -- common/autotest_common.sh@10 -- # set +x 00:06:47.704 [2024-07-13 00:15:34.701164] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:47.705 [2024-07-13 00:15:34.701495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69918 ] 00:06:47.705 [2024-07-13 00:15:34.852650] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:47.705 [2024-07-13 00:15:34.852685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.962 [2024-07-13 00:15:35.011983] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:47.962 [2024-07-13 00:15:35.012270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.962 [2024-07-13 00:15:35.015833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.962 [2024-07-13 00:15:35.015834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:48.527 00:15:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:48.527 00:15:35 -- common/autotest_common.sh@852 -- # return 0 00:06:48.527 00:15:35 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:48.527 00:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:48.527 00:15:35 -- common/autotest_common.sh@10 -- # set +x 00:06:48.527 00:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:48.527 00:15:35 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.527 00:15:35 -- common/autotest_common.sh@640 -- # local es=0 00:06:48.527 00:15:35 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.527 00:15:35 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:48.527 00:15:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:48.527 00:15:35 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:48.527 00:15:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:48.527 00:15:35 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.527 00:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:48.527 00:15:35 -- common/autotest_common.sh@10 -- # set +x 00:06:48.527 [2024-07-13 00:15:35.680807] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69888 has claimed it. 
00:06:48.527 2024/07/13 00:15:35 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:48.527 request: 00:06:48.527 { 00:06:48.527 "method": "framework_enable_cpumask_locks", 00:06:48.527 "params": {} 00:06:48.527 } 00:06:48.527 Got JSON-RPC error response 00:06:48.527 GoRPCClient: error on JSON-RPC call 00:06:48.527 00:15:35 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:48.527 00:15:35 -- common/autotest_common.sh@643 -- # es=1 00:06:48.527 00:15:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:48.527 00:15:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:48.527 00:15:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:48.527 00:15:35 -- event/cpu_locks.sh@158 -- # waitforlisten 69888 /var/tmp/spdk.sock 00:06:48.527 00:15:35 -- common/autotest_common.sh@819 -- # '[' -z 69888 ']' 00:06:48.527 00:15:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.527 00:15:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:48.527 00:15:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.527 00:15:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:48.527 00:15:35 -- common/autotest_common.sh@10 -- # set +x 00:06:48.784 00:15:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:48.784 00:15:35 -- common/autotest_common.sh@852 -- # return 0 00:06:48.784 00:15:35 -- event/cpu_locks.sh@159 -- # waitforlisten 69918 /var/tmp/spdk2.sock 00:06:48.784 00:15:35 -- common/autotest_common.sh@819 -- # '[' -z 69918 ']' 00:06:48.784 00:15:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.784 00:15:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:48.784 00:15:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
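Editor's note on the JSON-RPC failure above: the rpc_cmd wrapper in this trace ultimately drives SPDK's scripts/rpc.py against the second target's socket. Roughly the equivalent direct invocation is sketched below; the socket path and method name are taken from the log, while the rpc.py location is the usual in-repo one and is an assumption here:

  # Ask the already-running target to claim locks for its cpumask at runtime.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
      framework_enable_cpumask_locks
  # While process 69888 still holds the lock on core 2, this is expected to
  # fail with the -32603 "Failed to claim CPU core: 2" error shown above.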
00:06:48.784 00:15:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:48.784 00:15:35 -- common/autotest_common.sh@10 -- # set +x 00:06:49.042 00:15:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:49.042 00:15:36 -- common/autotest_common.sh@852 -- # return 0 00:06:49.042 00:15:36 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:49.042 00:15:36 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.042 ************************************ 00:06:49.042 END TEST locking_overlapped_coremask_via_rpc 00:06:49.042 ************************************ 00:06:49.042 00:15:36 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.042 00:15:36 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.042 00:06:49.042 real 0m2.568s 00:06:49.042 user 0m1.260s 00:06:49.042 sys 0m0.241s 00:06:49.042 00:15:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.042 00:15:36 -- common/autotest_common.sh@10 -- # set +x 00:06:49.042 00:15:36 -- event/cpu_locks.sh@174 -- # cleanup 00:06:49.042 00:15:36 -- event/cpu_locks.sh@15 -- # [[ -z 69888 ]] 00:06:49.042 00:15:36 -- event/cpu_locks.sh@15 -- # killprocess 69888 00:06:49.042 00:15:36 -- common/autotest_common.sh@926 -- # '[' -z 69888 ']' 00:06:49.042 00:15:36 -- common/autotest_common.sh@930 -- # kill -0 69888 00:06:49.042 00:15:36 -- common/autotest_common.sh@931 -- # uname 00:06:49.300 00:15:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:49.300 00:15:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69888 00:06:49.300 killing process with pid 69888 00:06:49.300 00:15:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:49.300 00:15:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:49.300 00:15:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69888' 00:06:49.300 00:15:36 -- common/autotest_common.sh@945 -- # kill 69888 00:06:49.300 00:15:36 -- common/autotest_common.sh@950 -- # wait 69888 00:06:49.558 00:15:36 -- event/cpu_locks.sh@16 -- # [[ -z 69918 ]] 00:06:49.558 00:15:36 -- event/cpu_locks.sh@16 -- # killprocess 69918 00:06:49.558 00:15:36 -- common/autotest_common.sh@926 -- # '[' -z 69918 ']' 00:06:49.558 00:15:36 -- common/autotest_common.sh@930 -- # kill -0 69918 00:06:49.558 00:15:36 -- common/autotest_common.sh@931 -- # uname 00:06:49.558 00:15:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:49.558 00:15:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69918 00:06:49.558 killing process with pid 69918 00:06:49.558 00:15:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:49.558 00:15:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:49.558 00:15:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69918' 00:06:49.558 00:15:36 -- common/autotest_common.sh@945 -- # kill 69918 00:06:49.558 00:15:36 -- common/autotest_common.sh@950 -- # wait 69918 00:06:49.817 00:15:37 -- event/cpu_locks.sh@18 -- # rm -f 00:06:49.817 Process with pid 69888 is not found 00:06:49.817 Process with pid 69918 is not found 00:06:49.817 00:15:37 -- event/cpu_locks.sh@1 -- # cleanup 00:06:49.817 00:15:37 -- event/cpu_locks.sh@15 -- # [[ -z 69888 ]] 
00:06:49.817 00:15:37 -- event/cpu_locks.sh@15 -- # killprocess 69888 00:06:49.817 00:15:37 -- common/autotest_common.sh@926 -- # '[' -z 69888 ']' 00:06:49.817 00:15:37 -- common/autotest_common.sh@930 -- # kill -0 69888 00:06:49.817 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (69888) - No such process 00:06:49.817 00:15:37 -- common/autotest_common.sh@953 -- # echo 'Process with pid 69888 is not found' 00:06:49.817 00:15:37 -- event/cpu_locks.sh@16 -- # [[ -z 69918 ]] 00:06:49.817 00:15:37 -- event/cpu_locks.sh@16 -- # killprocess 69918 00:06:49.817 00:15:37 -- common/autotest_common.sh@926 -- # '[' -z 69918 ']' 00:06:49.817 00:15:37 -- common/autotest_common.sh@930 -- # kill -0 69918 00:06:49.817 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (69918) - No such process 00:06:49.817 00:15:37 -- common/autotest_common.sh@953 -- # echo 'Process with pid 69918 is not found' 00:06:49.817 00:15:37 -- event/cpu_locks.sh@18 -- # rm -f 00:06:49.817 ************************************ 00:06:49.817 END TEST cpu_locks 00:06:49.817 ************************************ 00:06:49.817 00:06:49.817 real 0m20.820s 00:06:49.817 user 0m35.795s 00:06:49.817 sys 0m5.726s 00:06:49.817 00:15:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.817 00:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:50.076 00:06:50.076 real 0m46.246s 00:06:50.076 user 1m26.770s 00:06:50.076 sys 0m9.615s 00:06:50.076 ************************************ 00:06:50.076 END TEST event 00:06:50.076 ************************************ 00:06:50.076 00:15:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.076 00:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:50.076 00:15:37 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:50.076 00:15:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:50.076 00:15:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.076 00:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:50.076 ************************************ 00:06:50.076 START TEST thread 00:06:50.076 ************************************ 00:06:50.076 00:15:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:50.076 * Looking for test storage... 00:06:50.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:50.076 00:15:37 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:50.076 00:15:37 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:50.076 00:15:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.076 00:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:50.076 ************************************ 00:06:50.076 START TEST thread_poller_perf 00:06:50.076 ************************************ 00:06:50.076 00:15:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:50.076 [2024-07-13 00:15:37.235421] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:50.076 [2024-07-13 00:15:37.235690] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70064 ] 00:06:50.335 [2024-07-13 00:15:37.370857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.335 [2024-07-13 00:15:37.435879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.335 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:51.710 ====================================== 00:06:51.710 busy:2208886314 (cyc) 00:06:51.710 total_run_count: 336000 00:06:51.710 tsc_hz: 2200000000 (cyc) 00:06:51.710 ====================================== 00:06:51.710 poller_cost: 6574 (cyc), 2988 (nsec) 00:06:51.710 ************************************ 00:06:51.710 END TEST thread_poller_perf 00:06:51.710 ************************************ 00:06:51.710 00:06:51.710 real 0m1.295s 00:06:51.710 user 0m1.132s 00:06:51.710 sys 0m0.054s 00:06:51.710 00:15:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.710 00:15:38 -- common/autotest_common.sh@10 -- # set +x 00:06:51.710 00:15:38 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:51.710 00:15:38 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:51.710 00:15:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.710 00:15:38 -- common/autotest_common.sh@10 -- # set +x 00:06:51.710 ************************************ 00:06:51.710 START TEST thread_poller_perf 00:06:51.710 ************************************ 00:06:51.710 00:15:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:51.710 [2024-07-13 00:15:38.585418] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:51.710 [2024-07-13 00:15:38.585507] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70100 ] 00:06:51.710 [2024-07-13 00:15:38.723110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.710 Running 1000 pollers for 1 seconds with 0 microseconds period. 
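Editor's note on the poller_perf report above (-b pollers, -t seconds, -l poll period in microseconds, as the "Running 1000 pollers for 1 seconds with 1 microseconds period" banner spells out): poller_cost is the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz; both runs in this log agree with that. Redoing the arithmetic for the first run, with the numbers copied from the report:

  # Integer division, matching how the tool truncates its output.
  busy=2208886314 total_run_count=336000 tsc_hz=2200000000
  cyc=$(( busy / total_run_count ))              # 6574 cycles per poll
  nsec=$(( cyc * 1000000000 / tsc_hz ))          # 2988 ns per poll at 2.2 GHz
  echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"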
00:06:51.710 [2024-07-13 00:15:38.776001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.646 ====================================== 00:06:52.646 busy:2202663138 (cyc) 00:06:52.646 total_run_count: 4890000 00:06:52.646 tsc_hz: 2200000000 (cyc) 00:06:52.646 ====================================== 00:06:52.646 poller_cost: 450 (cyc), 204 (nsec) 00:06:52.646 00:06:52.646 real 0m1.291s 00:06:52.646 user 0m1.128s 00:06:52.646 sys 0m0.057s 00:06:52.646 00:15:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.646 00:15:39 -- common/autotest_common.sh@10 -- # set +x 00:06:52.646 ************************************ 00:06:52.646 END TEST thread_poller_perf 00:06:52.646 ************************************ 00:06:52.905 00:15:39 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:52.905 ************************************ 00:06:52.905 END TEST thread 00:06:52.905 ************************************ 00:06:52.905 00:06:52.905 real 0m2.767s 00:06:52.905 user 0m2.336s 00:06:52.905 sys 0m0.210s 00:06:52.905 00:15:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.905 00:15:39 -- common/autotest_common.sh@10 -- # set +x 00:06:52.905 00:15:39 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:52.905 00:15:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:52.905 00:15:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.905 00:15:39 -- common/autotest_common.sh@10 -- # set +x 00:06:52.905 ************************************ 00:06:52.905 START TEST accel 00:06:52.905 ************************************ 00:06:52.905 00:15:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:52.905 * Looking for test storage... 00:06:52.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:52.905 00:15:40 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:52.905 00:15:40 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:52.905 00:15:40 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:52.905 00:15:40 -- accel/accel.sh@59 -- # spdk_tgt_pid=70168 00:06:52.905 00:15:40 -- accel/accel.sh@60 -- # waitforlisten 70168 00:06:52.905 00:15:40 -- common/autotest_common.sh@819 -- # '[' -z 70168 ']' 00:06:52.905 00:15:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.905 00:15:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:52.906 00:15:40 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:52.906 00:15:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.906 00:15:40 -- accel/accel.sh@58 -- # build_accel_config 00:06:52.906 00:15:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:52.906 00:15:40 -- common/autotest_common.sh@10 -- # set +x 00:06:52.906 00:15:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.906 00:15:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.906 00:15:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.906 00:15:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.906 00:15:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.906 00:15:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.906 00:15:40 -- accel/accel.sh@42 -- # jq -r . 
00:06:52.906 [2024-07-13 00:15:40.083447] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:52.906 [2024-07-13 00:15:40.084210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70168 ] 00:06:53.164 [2024-07-13 00:15:40.230745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.164 [2024-07-13 00:15:40.313452] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:53.164 [2024-07-13 00:15:40.313659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.098 00:15:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:54.098 00:15:41 -- common/autotest_common.sh@852 -- # return 0 00:06:54.098 00:15:41 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:54.098 00:15:41 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:54.098 00:15:41 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:54.098 00:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.098 00:15:41 -- common/autotest_common.sh@10 -- # set +x 00:06:54.098 00:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.098 00:15:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # IFS== 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.098 00:15:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.098 00:15:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # IFS== 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.098 00:15:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.098 00:15:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # IFS== 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.098 00:15:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.098 00:15:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # IFS== 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.098 00:15:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.098 00:15:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # IFS== 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.098 00:15:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.098 00:15:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # IFS== 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.098 00:15:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.098 00:15:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # IFS== 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.098 00:15:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.098 00:15:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # IFS== 
00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.098 00:15:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.098 00:15:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # IFS== 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.098 00:15:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.098 00:15:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # IFS== 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.098 00:15:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.098 00:15:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # IFS== 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.098 00:15:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.098 00:15:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # IFS== 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.098 00:15:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.098 00:15:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # IFS== 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.098 00:15:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.098 00:15:41 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # IFS== 00:06:54.098 00:15:41 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.098 00:15:41 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.098 00:15:41 -- accel/accel.sh@67 -- # killprocess 70168 00:06:54.098 00:15:41 -- common/autotest_common.sh@926 -- # '[' -z 70168 ']' 00:06:54.098 00:15:41 -- common/autotest_common.sh@930 -- # kill -0 70168 00:06:54.098 00:15:41 -- common/autotest_common.sh@931 -- # uname 00:06:54.098 00:15:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:54.098 00:15:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70168 00:06:54.098 00:15:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:54.098 00:15:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:54.098 killing process with pid 70168 00:06:54.098 00:15:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70168' 00:06:54.098 00:15:41 -- common/autotest_common.sh@945 -- # kill 70168 00:06:54.098 00:15:41 -- common/autotest_common.sh@950 -- # wait 70168 00:06:54.356 00:15:41 -- accel/accel.sh@68 -- # trap - ERR 00:06:54.356 00:15:41 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:54.356 00:15:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:54.356 00:15:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.356 00:15:41 -- common/autotest_common.sh@10 -- # set +x 00:06:54.356 00:15:41 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:54.356 00:15:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:54.356 00:15:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.356 00:15:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.356 00:15:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.356 00:15:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:06:54.356 00:15:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.356 00:15:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.356 00:15:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.356 00:15:41 -- accel/accel.sh@42 -- # jq -r . 00:06:54.356 00:15:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.356 00:15:41 -- common/autotest_common.sh@10 -- # set +x 00:06:54.356 00:15:41 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:54.356 00:15:41 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:54.356 00:15:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.356 00:15:41 -- common/autotest_common.sh@10 -- # set +x 00:06:54.356 ************************************ 00:06:54.356 START TEST accel_missing_filename 00:06:54.356 ************************************ 00:06:54.356 00:15:41 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:54.356 00:15:41 -- common/autotest_common.sh@640 -- # local es=0 00:06:54.356 00:15:41 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:54.356 00:15:41 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:54.356 00:15:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.356 00:15:41 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:54.356 00:15:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.356 00:15:41 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:54.614 00:15:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:54.614 00:15:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.614 00:15:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.614 00:15:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.614 00:15:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.614 00:15:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.614 00:15:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.614 00:15:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.614 00:15:41 -- accel/accel.sh@42 -- # jq -r . 00:06:54.614 [2024-07-13 00:15:41.604859] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:54.614 [2024-07-13 00:15:41.604978] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70243 ] 00:06:54.614 [2024-07-13 00:15:41.743689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.614 [2024-07-13 00:15:41.824083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.871 [2024-07-13 00:15:41.881806] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.871 [2024-07-13 00:15:41.963121] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:54.871 A filename is required. 
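Editor's note on the accel_missing_filename case above: it runs accel_perf with a compress workload and no -l input file, which the tool refuses with "A filename is required.". A stripped-down repro using the binary path and flags from the trace; the ad-hoc -c /dev/fd/62 config the harness feeds in is dropped, and the ACCEL_PERF variable is just a convenience:

  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  # Expected to exit non-zero: compress/decompress workloads need -l <input file>.
  if ! "$ACCEL_PERF" -t 1 -w compress; then
      echo "missing -l input file rejected, as the test expects"
  fi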
00:06:54.871 00:15:42 -- common/autotest_common.sh@643 -- # es=234 00:06:54.871 00:15:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:54.871 00:15:42 -- common/autotest_common.sh@652 -- # es=106 00:06:54.871 00:15:42 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:54.871 00:15:42 -- common/autotest_common.sh@660 -- # es=1 00:06:54.871 00:15:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:54.871 00:06:54.871 real 0m0.470s 00:06:54.871 user 0m0.306s 00:06:54.871 sys 0m0.113s 00:06:54.871 00:15:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.871 00:15:42 -- common/autotest_common.sh@10 -- # set +x 00:06:54.871 ************************************ 00:06:54.871 END TEST accel_missing_filename 00:06:54.871 ************************************ 00:06:54.871 00:15:42 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:54.871 00:15:42 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:54.871 00:15:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.871 00:15:42 -- common/autotest_common.sh@10 -- # set +x 00:06:54.871 ************************************ 00:06:54.871 START TEST accel_compress_verify 00:06:54.871 ************************************ 00:06:54.871 00:15:42 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:54.871 00:15:42 -- common/autotest_common.sh@640 -- # local es=0 00:06:54.871 00:15:42 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:54.871 00:15:42 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:55.128 00:15:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:55.128 00:15:42 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:55.128 00:15:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:55.128 00:15:42 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:55.128 00:15:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:55.128 00:15:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.128 00:15:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.128 00:15:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.128 00:15:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.128 00:15:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.128 00:15:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.128 00:15:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.128 00:15:42 -- accel/accel.sh@42 -- # jq -r . 00:06:55.128 [2024-07-13 00:15:42.125289] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:55.128 [2024-07-13 00:15:42.125377] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70262 ] 00:06:55.128 [2024-07-13 00:15:42.262127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.128 [2024-07-13 00:15:42.352776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.385 [2024-07-13 00:15:42.409617] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.385 [2024-07-13 00:15:42.488350] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:55.385 00:06:55.385 Compression does not support the verify option, aborting. 00:06:55.385 ************************************ 00:06:55.385 END TEST accel_compress_verify 00:06:55.385 ************************************ 00:06:55.385 00:15:42 -- common/autotest_common.sh@643 -- # es=161 00:06:55.385 00:15:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:55.385 00:15:42 -- common/autotest_common.sh@652 -- # es=33 00:06:55.385 00:15:42 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:55.385 00:15:42 -- common/autotest_common.sh@660 -- # es=1 00:06:55.385 00:15:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:55.385 00:06:55.385 real 0m0.455s 00:06:55.385 user 0m0.282s 00:06:55.385 sys 0m0.119s 00:06:55.385 00:15:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.385 00:15:42 -- common/autotest_common.sh@10 -- # set +x 00:06:55.385 00:15:42 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:55.385 00:15:42 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:55.385 00:15:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.385 00:15:42 -- common/autotest_common.sh@10 -- # set +x 00:06:55.385 ************************************ 00:06:55.385 START TEST accel_wrong_workload 00:06:55.385 ************************************ 00:06:55.385 00:15:42 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:55.385 00:15:42 -- common/autotest_common.sh@640 -- # local es=0 00:06:55.385 00:15:42 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:55.385 00:15:42 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:55.385 00:15:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:55.385 00:15:42 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:55.385 00:15:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:55.385 00:15:42 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:55.385 00:15:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:55.385 00:15:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.385 00:15:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.385 00:15:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.385 00:15:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.385 00:15:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.385 00:15:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.385 00:15:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.385 00:15:42 -- accel/accel.sh@42 -- # jq -r . 
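Editor's note on the accel_compress_verify case that finished above: it pairs -w compress with -y (verify) plus the bib input file, a combination accel_perf aborts on. The equivalent direct invocation, taken from the traced command line, again without the harness's -c config and with convenience variables:

  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
  # Expected to exit non-zero: compression does not support the verify option.
  if ! "$ACCEL_PERF" -t 1 -w compress -l "$BIB" -y; then
      echo "compress + -y verify rejected, as the test expects"
  fi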
00:06:55.644 Unsupported workload type: foobar 00:06:55.644 [2024-07-13 00:15:42.619111] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:55.644 accel_perf options: 00:06:55.644 [-h help message] 00:06:55.644 [-q queue depth per core] 00:06:55.644 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:55.644 [-T number of threads per core 00:06:55.644 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:55.644 [-t time in seconds] 00:06:55.644 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:55.644 [ dif_verify, , dif_generate, dif_generate_copy 00:06:55.644 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:55.644 [-l for compress/decompress workloads, name of uncompressed input file 00:06:55.644 [-S for crc32c workload, use this seed value (default 0) 00:06:55.644 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:55.644 [-f for fill workload, use this BYTE value (default 255) 00:06:55.644 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:55.644 [-y verify result if this switch is on] 00:06:55.644 [-a tasks to allocate per core (default: same value as -q)] 00:06:55.644 Can be used to spread operations across a wider range of memory. 00:06:55.644 00:15:42 -- common/autotest_common.sh@643 -- # es=1 00:06:55.644 00:15:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:55.644 00:15:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:55.644 00:15:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:55.644 00:06:55.644 real 0m0.028s 00:06:55.644 user 0m0.014s 00:06:55.644 sys 0m0.012s 00:06:55.644 00:15:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.644 00:15:42 -- common/autotest_common.sh@10 -- # set +x 00:06:55.644 ************************************ 00:06:55.644 END TEST accel_wrong_workload 00:06:55.644 ************************************ 00:06:55.644 00:15:42 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:55.644 00:15:42 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:55.644 00:15:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.644 00:15:42 -- common/autotest_common.sh@10 -- # set +x 00:06:55.644 ************************************ 00:06:55.644 START TEST accel_negative_buffers 00:06:55.644 ************************************ 00:06:55.644 00:15:42 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:55.644 00:15:42 -- common/autotest_common.sh@640 -- # local es=0 00:06:55.644 00:15:42 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:55.644 00:15:42 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:55.644 00:15:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:55.644 00:15:42 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:55.644 00:15:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:55.644 00:15:42 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:55.644 00:15:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:55.644 00:15:42 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:55.644 00:15:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.644 00:15:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.644 00:15:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.644 00:15:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.644 00:15:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.644 00:15:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.644 00:15:42 -- accel/accel.sh@42 -- # jq -r . 00:06:55.644 -x option must be non-negative. 00:06:55.644 [2024-07-13 00:15:42.696586] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:55.644 accel_perf options: 00:06:55.644 [-h help message] 00:06:55.644 [-q queue depth per core] 00:06:55.644 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:55.644 [-T number of threads per core 00:06:55.644 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:55.644 [-t time in seconds] 00:06:55.644 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:55.644 [ dif_verify, , dif_generate, dif_generate_copy 00:06:55.644 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:55.644 [-l for compress/decompress workloads, name of uncompressed input file 00:06:55.644 [-S for crc32c workload, use this seed value (default 0) 00:06:55.644 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:55.644 [-f for fill workload, use this BYTE value (default 255) 00:06:55.644 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:55.644 [-y verify result if this switch is on] 00:06:55.644 [-a tasks to allocate per core (default: same value as -q)] 00:06:55.644 Can be used to spread operations across a wider range of memory. 
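The usage listing above is what accel_perf prints whenever it rejects a command-line parameter. For orientation, a representative positive-path invocation, assembled by hand from the flags shown in this log rather than copied from it (the real test scripts additionally pass -c /dev/fd/62 to feed in the JSON produced by build_accel_config), would be roughly:

    # illustrative only: 1-second software crc32c run, queue depth 32,
    # 4 KiB transfers, CRC seed 32, with result verification enabled
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w crc32c -S 32 -q 32 -o 4096 -y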
00:06:55.644 00:15:42 -- common/autotest_common.sh@643 -- # es=1 00:06:55.644 00:15:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:55.644 00:15:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:55.644 00:15:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:55.644 00:06:55.644 real 0m0.030s 00:06:55.644 user 0m0.016s 00:06:55.644 sys 0m0.013s 00:06:55.644 00:15:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.644 ************************************ 00:06:55.644 END TEST accel_negative_buffers 00:06:55.644 00:15:42 -- common/autotest_common.sh@10 -- # set +x 00:06:55.644 ************************************ 00:06:55.644 00:15:42 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:55.644 00:15:42 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:55.644 00:15:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.644 00:15:42 -- common/autotest_common.sh@10 -- # set +x 00:06:55.644 ************************************ 00:06:55.644 START TEST accel_crc32c 00:06:55.644 ************************************ 00:06:55.644 00:15:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:55.644 00:15:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.644 00:15:42 -- accel/accel.sh@17 -- # local accel_module 00:06:55.644 00:15:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:55.644 00:15:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:55.644 00:15:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.644 00:15:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.644 00:15:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.644 00:15:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.644 00:15:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.644 00:15:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.644 00:15:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.644 00:15:42 -- accel/accel.sh@42 -- # jq -r . 00:06:55.644 [2024-07-13 00:15:42.767334] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:55.644 [2024-07-13 00:15:42.767442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70326 ] 00:06:55.901 [2024-07-13 00:15:42.898010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.901 [2024-07-13 00:15:42.973498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.276 00:15:44 -- accel/accel.sh@18 -- # out=' 00:06:57.276 SPDK Configuration: 00:06:57.276 Core mask: 0x1 00:06:57.276 00:06:57.276 Accel Perf Configuration: 00:06:57.276 Workload Type: crc32c 00:06:57.276 CRC-32C seed: 32 00:06:57.276 Transfer size: 4096 bytes 00:06:57.276 Vector count 1 00:06:57.276 Module: software 00:06:57.276 Queue depth: 32 00:06:57.276 Allocate depth: 32 00:06:57.276 # threads/core: 1 00:06:57.276 Run time: 1 seconds 00:06:57.276 Verify: Yes 00:06:57.276 00:06:57.276 Running for 1 seconds... 
00:06:57.276 00:06:57.276 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.276 ------------------------------------------------------------------------------------ 00:06:57.276 0,0 504288/s 1969 MiB/s 0 0 00:06:57.276 ==================================================================================== 00:06:57.276 Total 504288/s 1969 MiB/s 0 0' 00:06:57.276 00:15:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:57.276 00:15:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:57.276 00:15:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.276 00:15:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.276 00:15:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.276 00:15:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.276 00:15:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.276 00:15:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.276 00:15:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.276 00:15:44 -- accel/accel.sh@42 -- # jq -r . 00:06:57.276 [2024-07-13 00:15:44.195265] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:57.276 [2024-07-13 00:15:44.195368] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70342 ] 00:06:57.276 [2024-07-13 00:15:44.326027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.276 [2024-07-13 00:15:44.396384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.276 00:15:44 -- accel/accel.sh@21 -- # val= 00:06:57.276 00:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:57.276 00:15:44 -- accel/accel.sh@21 -- # val= 00:06:57.276 00:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:57.276 00:15:44 -- accel/accel.sh@21 -- # val=0x1 00:06:57.276 00:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:57.276 00:15:44 -- accel/accel.sh@21 -- # val= 00:06:57.276 00:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:57.276 00:15:44 -- accel/accel.sh@21 -- # val= 00:06:57.276 00:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:57.276 00:15:44 -- accel/accel.sh@21 -- # val=crc32c 00:06:57.276 00:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.276 00:15:44 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:57.276 00:15:44 -- accel/accel.sh@21 -- # val=32 00:06:57.276 00:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:57.276 00:15:44 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.276 00:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:57.276 00:15:44 -- accel/accel.sh@21 -- # val= 00:06:57.276 00:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.276 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:57.276 00:15:44 -- accel/accel.sh@21 -- # val=software 00:06:57.277 00:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.277 00:15:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.277 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.277 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:57.277 00:15:44 -- accel/accel.sh@21 -- # val=32 00:06:57.277 00:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.277 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.277 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:57.277 00:15:44 -- accel/accel.sh@21 -- # val=32 00:06:57.277 00:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.277 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.277 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:57.277 00:15:44 -- accel/accel.sh@21 -- # val=1 00:06:57.277 00:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.277 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.277 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:57.277 00:15:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.277 00:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.277 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.277 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:57.277 00:15:44 -- accel/accel.sh@21 -- # val=Yes 00:06:57.277 00:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.277 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.277 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:57.277 00:15:44 -- accel/accel.sh@21 -- # val= 00:06:57.277 00:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.277 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.277 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:57.277 00:15:44 -- accel/accel.sh@21 -- # val= 00:06:57.277 00:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.277 00:15:44 -- accel/accel.sh@20 -- # IFS=: 00:06:57.277 00:15:44 -- accel/accel.sh@20 -- # read -r var val 00:06:58.651 00:15:45 -- accel/accel.sh@21 -- # val= 00:06:58.651 00:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.651 00:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:58.651 00:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:58.651 00:15:45 -- accel/accel.sh@21 -- # val= 00:06:58.651 00:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.651 00:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:58.651 00:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:58.651 00:15:45 -- accel/accel.sh@21 -- # val= 00:06:58.651 00:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.651 00:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:58.651 00:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:58.651 00:15:45 -- accel/accel.sh@21 -- # val= 00:06:58.651 00:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.651 00:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:58.651 00:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:58.651 00:15:45 -- accel/accel.sh@21 -- # val= 00:06:58.651 00:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.651 00:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:58.651 00:15:45 -- 
accel/accel.sh@20 -- # read -r var val 00:06:58.651 00:15:45 -- accel/accel.sh@21 -- # val= 00:06:58.652 00:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.652 00:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:58.652 00:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:58.652 00:15:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.652 00:15:45 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:58.652 00:15:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.652 00:06:58.652 real 0m2.859s 00:06:58.652 user 0m2.449s 00:06:58.652 sys 0m0.212s 00:06:58.652 00:15:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.652 00:15:45 -- common/autotest_common.sh@10 -- # set +x 00:06:58.652 ************************************ 00:06:58.652 END TEST accel_crc32c 00:06:58.652 ************************************ 00:06:58.652 00:15:45 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:58.652 00:15:45 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:58.652 00:15:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.652 00:15:45 -- common/autotest_common.sh@10 -- # set +x 00:06:58.652 ************************************ 00:06:58.652 START TEST accel_crc32c_C2 00:06:58.652 ************************************ 00:06:58.652 00:15:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:58.652 00:15:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.652 00:15:45 -- accel/accel.sh@17 -- # local accel_module 00:06:58.652 00:15:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:58.652 00:15:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:58.652 00:15:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.652 00:15:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.652 00:15:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.652 00:15:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.652 00:15:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.652 00:15:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.652 00:15:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.652 00:15:45 -- accel/accel.sh@42 -- # jq -r . 00:06:58.652 [2024-07-13 00:15:45.680423] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:58.652 [2024-07-13 00:15:45.681096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70377 ] 00:06:58.652 [2024-07-13 00:15:45.816838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.910 [2024-07-13 00:15:45.889370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.286 00:15:47 -- accel/accel.sh@18 -- # out=' 00:07:00.286 SPDK Configuration: 00:07:00.286 Core mask: 0x1 00:07:00.286 00:07:00.286 Accel Perf Configuration: 00:07:00.286 Workload Type: crc32c 00:07:00.286 CRC-32C seed: 0 00:07:00.286 Transfer size: 4096 bytes 00:07:00.286 Vector count 2 00:07:00.286 Module: software 00:07:00.286 Queue depth: 32 00:07:00.286 Allocate depth: 32 00:07:00.286 # threads/core: 1 00:07:00.286 Run time: 1 seconds 00:07:00.286 Verify: Yes 00:07:00.286 00:07:00.286 Running for 1 seconds... 
00:07:00.286 00:07:00.286 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.286 ------------------------------------------------------------------------------------ 00:07:00.286 0,0 355872/s 2780 MiB/s 0 0 00:07:00.286 ==================================================================================== 00:07:00.286 Total 355872/s 1390 MiB/s 0 0' 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.286 00:15:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.286 00:15:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:00.286 00:15:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.286 00:15:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.286 00:15:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.286 00:15:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.286 00:15:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.286 00:15:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.286 00:15:47 -- accel/accel.sh@42 -- # jq -r . 00:07:00.286 [2024-07-13 00:15:47.123944] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:00.286 [2024-07-13 00:15:47.124040] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70396 ] 00:07:00.286 [2024-07-13 00:15:47.259477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.286 [2024-07-13 00:15:47.330680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.286 00:15:47 -- accel/accel.sh@21 -- # val= 00:07:00.286 00:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.286 00:15:47 -- accel/accel.sh@21 -- # val= 00:07:00.286 00:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.286 00:15:47 -- accel/accel.sh@21 -- # val=0x1 00:07:00.286 00:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.286 00:15:47 -- accel/accel.sh@21 -- # val= 00:07:00.286 00:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.286 00:15:47 -- accel/accel.sh@21 -- # val= 00:07:00.286 00:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.286 00:15:47 -- accel/accel.sh@21 -- # val=crc32c 00:07:00.286 00:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.286 00:15:47 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.286 00:15:47 -- accel/accel.sh@21 -- # val=0 00:07:00.286 00:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.286 00:15:47 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.286 00:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.286 00:15:47 -- accel/accel.sh@21 -- # val= 00:07:00.286 00:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.286 00:15:47 -- accel/accel.sh@21 -- # val=software 00:07:00.286 00:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.286 00:15:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.286 00:15:47 -- accel/accel.sh@21 -- # val=32 00:07:00.286 00:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.286 00:15:47 -- accel/accel.sh@21 -- # val=32 00:07:00.286 00:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.286 00:15:47 -- accel/accel.sh@21 -- # val=1 00:07:00.286 00:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.286 00:15:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.286 00:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.286 00:15:47 -- accel/accel.sh@21 -- # val=Yes 00:07:00.286 00:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.286 00:15:47 -- accel/accel.sh@21 -- # val= 00:07:00.286 00:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:00.286 00:15:47 -- accel/accel.sh@21 -- # val= 00:07:00.286 00:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:00.286 00:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 00:15:48 -- accel/accel.sh@21 -- # val= 00:07:01.661 00:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 00:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 00:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 00:15:48 -- accel/accel.sh@21 -- # val= 00:07:01.661 00:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 00:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 00:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 00:15:48 -- accel/accel.sh@21 -- # val= 00:07:01.661 00:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 00:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 00:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 00:15:48 -- accel/accel.sh@21 -- # val= 00:07:01.661 00:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 00:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 00:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 00:15:48 -- accel/accel.sh@21 -- # val= 00:07:01.661 00:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 00:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 00:15:48 -- 
accel/accel.sh@20 -- # read -r var val 00:07:01.661 ************************************ 00:07:01.661 END TEST accel_crc32c_C2 00:07:01.661 ************************************ 00:07:01.661 00:15:48 -- accel/accel.sh@21 -- # val= 00:07:01.661 00:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.661 00:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:01.661 00:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:01.661 00:15:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.661 00:15:48 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:01.661 00:15:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.661 00:07:01.661 real 0m2.898s 00:07:01.661 user 0m2.466s 00:07:01.661 sys 0m0.232s 00:07:01.661 00:15:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.661 00:15:48 -- common/autotest_common.sh@10 -- # set +x 00:07:01.661 00:15:48 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:01.661 00:15:48 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:01.661 00:15:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.661 00:15:48 -- common/autotest_common.sh@10 -- # set +x 00:07:01.661 ************************************ 00:07:01.661 START TEST accel_copy 00:07:01.661 ************************************ 00:07:01.661 00:15:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:07:01.661 00:15:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.661 00:15:48 -- accel/accel.sh@17 -- # local accel_module 00:07:01.661 00:15:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:01.661 00:15:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:01.661 00:15:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.661 00:15:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.661 00:15:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.661 00:15:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.661 00:15:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.661 00:15:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.661 00:15:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.661 00:15:48 -- accel/accel.sh@42 -- # jq -r . 00:07:01.661 [2024-07-13 00:15:48.627694] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:01.661 [2024-07-13 00:15:48.627773] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70431 ] 00:07:01.661 [2024-07-13 00:15:48.758748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.661 [2024-07-13 00:15:48.856686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.043 00:15:50 -- accel/accel.sh@18 -- # out=' 00:07:03.043 SPDK Configuration: 00:07:03.043 Core mask: 0x1 00:07:03.043 00:07:03.043 Accel Perf Configuration: 00:07:03.043 Workload Type: copy 00:07:03.044 Transfer size: 4096 bytes 00:07:03.044 Vector count 1 00:07:03.044 Module: software 00:07:03.044 Queue depth: 32 00:07:03.044 Allocate depth: 32 00:07:03.044 # threads/core: 1 00:07:03.044 Run time: 1 seconds 00:07:03.044 Verify: Yes 00:07:03.044 00:07:03.044 Running for 1 seconds... 
00:07:03.044 00:07:03.044 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.044 ------------------------------------------------------------------------------------ 00:07:03.044 0,0 307648/s 1201 MiB/s 0 0 00:07:03.044 ==================================================================================== 00:07:03.044 Total 307648/s 1201 MiB/s 0 0' 00:07:03.044 00:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.044 00:15:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:03.044 00:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.044 00:15:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:03.044 00:15:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.044 00:15:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.044 00:15:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.044 00:15:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.044 00:15:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.044 00:15:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.044 00:15:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.044 00:15:50 -- accel/accel.sh@42 -- # jq -r . 00:07:03.044 [2024-07-13 00:15:50.085964] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:03.044 [2024-07-13 00:15:50.086065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70450 ] 00:07:03.044 [2024-07-13 00:15:50.217629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.302 [2024-07-13 00:15:50.313070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.302 00:15:50 -- accel/accel.sh@21 -- # val= 00:07:03.302 00:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.302 00:15:50 -- accel/accel.sh@21 -- # val= 00:07:03.302 00:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.302 00:15:50 -- accel/accel.sh@21 -- # val=0x1 00:07:03.302 00:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.302 00:15:50 -- accel/accel.sh@21 -- # val= 00:07:03.302 00:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.302 00:15:50 -- accel/accel.sh@21 -- # val= 00:07:03.302 00:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.302 00:15:50 -- accel/accel.sh@21 -- # val=copy 00:07:03.302 00:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.302 00:15:50 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.302 00:15:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.302 00:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.302 00:15:50 -- 
accel/accel.sh@21 -- # val= 00:07:03.302 00:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.302 00:15:50 -- accel/accel.sh@21 -- # val=software 00:07:03.302 00:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.302 00:15:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.302 00:15:50 -- accel/accel.sh@21 -- # val=32 00:07:03.302 00:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.302 00:15:50 -- accel/accel.sh@21 -- # val=32 00:07:03.302 00:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.302 00:15:50 -- accel/accel.sh@21 -- # val=1 00:07:03.302 00:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.302 00:15:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.302 00:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.302 00:15:50 -- accel/accel.sh@21 -- # val=Yes 00:07:03.302 00:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.302 00:15:50 -- accel/accel.sh@21 -- # val= 00:07:03.302 00:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:03.302 00:15:50 -- accel/accel.sh@21 -- # val= 00:07:03.302 00:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:03.302 00:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.679 00:15:51 -- accel/accel.sh@21 -- # val= 00:07:04.679 00:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.679 00:15:51 -- accel/accel.sh@20 -- # IFS=: 00:07:04.679 00:15:51 -- accel/accel.sh@20 -- # read -r var val 00:07:04.679 00:15:51 -- accel/accel.sh@21 -- # val= 00:07:04.679 00:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.679 00:15:51 -- accel/accel.sh@20 -- # IFS=: 00:07:04.679 00:15:51 -- accel/accel.sh@20 -- # read -r var val 00:07:04.679 00:15:51 -- accel/accel.sh@21 -- # val= 00:07:04.679 00:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.679 00:15:51 -- accel/accel.sh@20 -- # IFS=: 00:07:04.679 00:15:51 -- accel/accel.sh@20 -- # read -r var val 00:07:04.679 00:15:51 -- accel/accel.sh@21 -- # val= 00:07:04.679 00:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.679 00:15:51 -- accel/accel.sh@20 -- # IFS=: 00:07:04.679 00:15:51 -- accel/accel.sh@20 -- # read -r var val 00:07:04.679 00:15:51 -- accel/accel.sh@21 -- # val= 00:07:04.679 00:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.679 00:15:51 -- accel/accel.sh@20 -- # IFS=: 00:07:04.679 00:15:51 -- accel/accel.sh@20 -- # read -r var val 00:07:04.679 00:15:51 -- accel/accel.sh@21 -- # val= 00:07:04.679 00:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.679 00:15:51 -- accel/accel.sh@20 -- # IFS=: 00:07:04.679 00:15:51 -- 
accel/accel.sh@20 -- # read -r var val 00:07:04.679 00:15:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.679 00:15:51 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:04.679 00:15:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.679 00:07:04.679 real 0m2.924s 00:07:04.679 user 0m2.501s 00:07:04.679 sys 0m0.219s 00:07:04.679 ************************************ 00:07:04.679 END TEST accel_copy 00:07:04.679 ************************************ 00:07:04.679 00:15:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.679 00:15:51 -- common/autotest_common.sh@10 -- # set +x 00:07:04.679 00:15:51 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.679 00:15:51 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:04.679 00:15:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:04.679 00:15:51 -- common/autotest_common.sh@10 -- # set +x 00:07:04.679 ************************************ 00:07:04.679 START TEST accel_fill 00:07:04.679 ************************************ 00:07:04.679 00:15:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.679 00:15:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.679 00:15:51 -- accel/accel.sh@17 -- # local accel_module 00:07:04.679 00:15:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.679 00:15:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.679 00:15:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.679 00:15:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.679 00:15:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.679 00:15:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.679 00:15:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.679 00:15:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.679 00:15:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.679 00:15:51 -- accel/accel.sh@42 -- # jq -r . 00:07:04.679 [2024-07-13 00:15:51.606932] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:04.679 [2024-07-13 00:15:51.607034] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70485 ] 00:07:04.679 [2024-07-13 00:15:51.742210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.679 [2024-07-13 00:15:51.838257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.052 00:15:53 -- accel/accel.sh@18 -- # out=' 00:07:06.052 SPDK Configuration: 00:07:06.052 Core mask: 0x1 00:07:06.052 00:07:06.052 Accel Perf Configuration: 00:07:06.052 Workload Type: fill 00:07:06.052 Fill pattern: 0x80 00:07:06.052 Transfer size: 4096 bytes 00:07:06.052 Vector count 1 00:07:06.052 Module: software 00:07:06.052 Queue depth: 64 00:07:06.052 Allocate depth: 64 00:07:06.052 # threads/core: 1 00:07:06.052 Run time: 1 seconds 00:07:06.052 Verify: Yes 00:07:06.052 00:07:06.052 Running for 1 seconds... 
00:07:06.052 00:07:06.052 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.052 ------------------------------------------------------------------------------------ 00:07:06.052 0,0 463680/s 1811 MiB/s 0 0 00:07:06.052 ==================================================================================== 00:07:06.053 Total 463680/s 1811 MiB/s 0 0' 00:07:06.053 00:15:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.053 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.053 00:15:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.053 00:15:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.053 00:15:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.053 00:15:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.053 00:15:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.053 00:15:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.053 00:15:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.053 00:15:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.053 00:15:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.053 00:15:53 -- accel/accel.sh@42 -- # jq -r . 00:07:06.053 [2024-07-13 00:15:53.076578] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:06.053 [2024-07-13 00:15:53.077333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70503 ] 00:07:06.053 [2024-07-13 00:15:53.217663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.310 [2024-07-13 00:15:53.317207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.310 00:15:53 -- accel/accel.sh@21 -- # val= 00:07:06.310 00:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.310 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.310 00:15:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.310 00:15:53 -- accel/accel.sh@21 -- # val= 00:07:06.310 00:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.310 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.310 00:15:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.310 00:15:53 -- accel/accel.sh@21 -- # val=0x1 00:07:06.310 00:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.310 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.310 00:15:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.310 00:15:53 -- accel/accel.sh@21 -- # val= 00:07:06.310 00:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.310 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.310 00:15:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.310 00:15:53 -- accel/accel.sh@21 -- # val= 00:07:06.310 00:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.310 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.310 00:15:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.310 00:15:53 -- accel/accel.sh@21 -- # val=fill 00:07:06.310 00:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.310 00:15:53 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:06.310 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.310 00:15:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.310 00:15:53 -- accel/accel.sh@21 -- # val=0x80 00:07:06.310 00:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.310 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.310 00:15:53 -- accel/accel.sh@20 -- # read -r var val 
00:07:06.310 00:15:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.310 00:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.310 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.311 00:15:53 -- accel/accel.sh@21 -- # val= 00:07:06.311 00:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.311 00:15:53 -- accel/accel.sh@21 -- # val=software 00:07:06.311 00:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.311 00:15:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.311 00:15:53 -- accel/accel.sh@21 -- # val=64 00:07:06.311 00:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.311 00:15:53 -- accel/accel.sh@21 -- # val=64 00:07:06.311 00:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.311 00:15:53 -- accel/accel.sh@21 -- # val=1 00:07:06.311 00:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.311 00:15:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.311 00:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.311 00:15:53 -- accel/accel.sh@21 -- # val=Yes 00:07:06.311 00:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.311 00:15:53 -- accel/accel.sh@21 -- # val= 00:07:06.311 00:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # read -r var val 00:07:06.311 00:15:53 -- accel/accel.sh@21 -- # val= 00:07:06.311 00:15:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # IFS=: 00:07:06.311 00:15:53 -- accel/accel.sh@20 -- # read -r var val 00:07:07.685 00:15:54 -- accel/accel.sh@21 -- # val= 00:07:07.685 00:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.685 00:15:54 -- accel/accel.sh@20 -- # IFS=: 00:07:07.685 00:15:54 -- accel/accel.sh@20 -- # read -r var val 00:07:07.685 00:15:54 -- accel/accel.sh@21 -- # val= 00:07:07.685 00:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.685 00:15:54 -- accel/accel.sh@20 -- # IFS=: 00:07:07.685 00:15:54 -- accel/accel.sh@20 -- # read -r var val 00:07:07.685 00:15:54 -- accel/accel.sh@21 -- # val= 00:07:07.685 00:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.685 00:15:54 -- accel/accel.sh@20 -- # IFS=: 00:07:07.685 00:15:54 -- accel/accel.sh@20 -- # read -r var val 00:07:07.685 00:15:54 -- accel/accel.sh@21 -- # val= 00:07:07.685 00:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.685 00:15:54 -- accel/accel.sh@20 -- # IFS=: 00:07:07.685 00:15:54 -- accel/accel.sh@20 -- # read -r var val 00:07:07.685 00:15:54 -- accel/accel.sh@21 -- # val= 00:07:07.685 00:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.685 00:15:54 -- accel/accel.sh@20 -- # IFS=: 
00:07:07.685 00:15:54 -- accel/accel.sh@20 -- # read -r var val 00:07:07.685 00:15:54 -- accel/accel.sh@21 -- # val= 00:07:07.685 00:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.685 00:15:54 -- accel/accel.sh@20 -- # IFS=: 00:07:07.685 00:15:54 -- accel/accel.sh@20 -- # read -r var val 00:07:07.685 00:15:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.685 00:15:54 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:07.685 00:15:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.685 00:07:07.685 real 0m2.958s 00:07:07.685 user 0m2.531s 00:07:07.685 sys 0m0.221s 00:07:07.685 00:15:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.685 ************************************ 00:07:07.685 END TEST accel_fill 00:07:07.685 ************************************ 00:07:07.685 00:15:54 -- common/autotest_common.sh@10 -- # set +x 00:07:07.685 00:15:54 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:07.685 00:15:54 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:07.685 00:15:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.686 00:15:54 -- common/autotest_common.sh@10 -- # set +x 00:07:07.686 ************************************ 00:07:07.686 START TEST accel_copy_crc32c 00:07:07.686 ************************************ 00:07:07.686 00:15:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:07.686 00:15:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.686 00:15:54 -- accel/accel.sh@17 -- # local accel_module 00:07:07.686 00:15:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:07.686 00:15:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:07.686 00:15:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.686 00:15:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.686 00:15:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.686 00:15:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.686 00:15:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.686 00:15:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.686 00:15:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.686 00:15:54 -- accel/accel.sh@42 -- # jq -r . 00:07:07.686 [2024-07-13 00:15:54.617467] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:07.686 [2024-07-13 00:15:54.618346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70539 ] 00:07:07.686 [2024-07-13 00:15:54.761048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.686 [2024-07-13 00:15:54.857466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.058 00:15:56 -- accel/accel.sh@18 -- # out=' 00:07:09.058 SPDK Configuration: 00:07:09.058 Core mask: 0x1 00:07:09.058 00:07:09.058 Accel Perf Configuration: 00:07:09.058 Workload Type: copy_crc32c 00:07:09.058 CRC-32C seed: 0 00:07:09.058 Vector size: 4096 bytes 00:07:09.058 Transfer size: 4096 bytes 00:07:09.058 Vector count 1 00:07:09.058 Module: software 00:07:09.058 Queue depth: 32 00:07:09.058 Allocate depth: 32 00:07:09.058 # threads/core: 1 00:07:09.058 Run time: 1 seconds 00:07:09.058 Verify: Yes 00:07:09.058 00:07:09.058 Running for 1 seconds... 
00:07:09.058 00:07:09.058 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.058 ------------------------------------------------------------------------------------ 00:07:09.058 0,0 230976/s 902 MiB/s 0 0 00:07:09.058 ==================================================================================== 00:07:09.058 Total 230976/s 902 MiB/s 0 0' 00:07:09.058 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.058 00:15:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:09.058 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.058 00:15:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:09.058 00:15:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.058 00:15:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.058 00:15:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.058 00:15:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.058 00:15:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.058 00:15:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.058 00:15:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.058 00:15:56 -- accel/accel.sh@42 -- # jq -r . 00:07:09.058 [2024-07-13 00:15:56.091492] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:09.058 [2024-07-13 00:15:56.091596] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70553 ] 00:07:09.058 [2024-07-13 00:15:56.222791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.316 [2024-07-13 00:15:56.319207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.316 00:15:56 -- accel/accel.sh@21 -- # val= 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 00:15:56 -- accel/accel.sh@21 -- # val= 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 00:15:56 -- accel/accel.sh@21 -- # val=0x1 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 00:15:56 -- accel/accel.sh@21 -- # val= 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 00:15:56 -- accel/accel.sh@21 -- # val= 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 00:15:56 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 00:15:56 -- accel/accel.sh@21 -- # val=0 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 
00:15:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 00:15:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 00:15:56 -- accel/accel.sh@21 -- # val= 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 00:15:56 -- accel/accel.sh@21 -- # val=software 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 00:15:56 -- accel/accel.sh@21 -- # val=32 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 00:15:56 -- accel/accel.sh@21 -- # val=32 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 00:15:56 -- accel/accel.sh@21 -- # val=1 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 00:15:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 00:15:56 -- accel/accel.sh@21 -- # val=Yes 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 00:15:56 -- accel/accel.sh@21 -- # val= 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 00:15:56 -- accel/accel.sh@21 -- # val= 00:07:09.316 00:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 00:15:56 -- accel/accel.sh@20 -- # read -r var val 00:07:10.690 00:15:57 -- accel/accel.sh@21 -- # val= 00:07:10.690 00:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.690 00:15:57 -- accel/accel.sh@20 -- # IFS=: 00:07:10.690 00:15:57 -- accel/accel.sh@20 -- # read -r var val 00:07:10.690 00:15:57 -- accel/accel.sh@21 -- # val= 00:07:10.690 00:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.690 00:15:57 -- accel/accel.sh@20 -- # IFS=: 00:07:10.690 00:15:57 -- accel/accel.sh@20 -- # read -r var val 00:07:10.690 00:15:57 -- accel/accel.sh@21 -- # val= 00:07:10.690 00:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.690 00:15:57 -- accel/accel.sh@20 -- # IFS=: 00:07:10.690 00:15:57 -- accel/accel.sh@20 -- # read -r var val 00:07:10.690 00:15:57 -- accel/accel.sh@21 -- # val= 00:07:10.690 00:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.690 00:15:57 -- accel/accel.sh@20 -- # IFS=: 
00:07:10.690 00:15:57 -- accel/accel.sh@20 -- # read -r var val 00:07:10.690 00:15:57 -- accel/accel.sh@21 -- # val= 00:07:10.690 00:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.690 00:15:57 -- accel/accel.sh@20 -- # IFS=: 00:07:10.690 00:15:57 -- accel/accel.sh@20 -- # read -r var val 00:07:10.690 00:15:57 -- accel/accel.sh@21 -- # val= 00:07:10.690 00:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.690 00:15:57 -- accel/accel.sh@20 -- # IFS=: 00:07:10.690 00:15:57 -- accel/accel.sh@20 -- # read -r var val 00:07:10.690 00:15:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.690 00:15:57 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:10.690 00:15:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.690 00:07:10.690 real 0m2.930s 00:07:10.690 user 0m2.506s 00:07:10.690 sys 0m0.221s 00:07:10.690 00:15:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.690 ************************************ 00:07:10.690 END TEST accel_copy_crc32c 00:07:10.690 ************************************ 00:07:10.690 00:15:57 -- common/autotest_common.sh@10 -- # set +x 00:07:10.690 00:15:57 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:10.690 00:15:57 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:10.690 00:15:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.690 00:15:57 -- common/autotest_common.sh@10 -- # set +x 00:07:10.690 ************************************ 00:07:10.690 START TEST accel_copy_crc32c_C2 00:07:10.690 ************************************ 00:07:10.690 00:15:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:10.690 00:15:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.690 00:15:57 -- accel/accel.sh@17 -- # local accel_module 00:07:10.690 00:15:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:10.690 00:15:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:10.690 00:15:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.690 00:15:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.690 00:15:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.690 00:15:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.690 00:15:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.690 00:15:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.690 00:15:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.690 00:15:57 -- accel/accel.sh@42 -- # jq -r . 00:07:10.690 [2024-07-13 00:15:57.595294] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:10.690 [2024-07-13 00:15:57.595390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70593 ] 00:07:10.690 [2024-07-13 00:15:57.730325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.690 [2024-07-13 00:15:57.818751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.090 00:15:59 -- accel/accel.sh@18 -- # out=' 00:07:12.090 SPDK Configuration: 00:07:12.090 Core mask: 0x1 00:07:12.090 00:07:12.090 Accel Perf Configuration: 00:07:12.090 Workload Type: copy_crc32c 00:07:12.090 CRC-32C seed: 0 00:07:12.090 Vector size: 4096 bytes 00:07:12.090 Transfer size: 8192 bytes 00:07:12.090 Vector count 2 00:07:12.090 Module: software 00:07:12.090 Queue depth: 32 00:07:12.090 Allocate depth: 32 00:07:12.090 # threads/core: 1 00:07:12.090 Run time: 1 seconds 00:07:12.090 Verify: Yes 00:07:12.090 00:07:12.090 Running for 1 seconds... 00:07:12.090 00:07:12.090 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.090 ------------------------------------------------------------------------------------ 00:07:12.090 0,0 187200/s 1462 MiB/s 0 0 00:07:12.090 ==================================================================================== 00:07:12.090 Total 187200/s 731 MiB/s 0 0' 00:07:12.090 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.090 00:15:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:12.090 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.090 00:15:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:12.090 00:15:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.090 00:15:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.090 00:15:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.090 00:15:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.090 00:15:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.090 00:15:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.090 00:15:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.090 00:15:59 -- accel/accel.sh@42 -- # jq -r . 00:07:12.090 [2024-07-13 00:15:59.049156] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:12.090 [2024-07-13 00:15:59.049235] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70607 ] 00:07:12.090 [2024-07-13 00:15:59.179643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.090 [2024-07-13 00:15:59.270172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val= 00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val= 00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val=0x1 00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val= 00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val= 00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val=0 00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val= 00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val=software 00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val=32 00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val=32 
00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val=1 00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val=Yes 00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val= 00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:12.349 00:15:59 -- accel/accel.sh@21 -- # val= 00:07:12.349 00:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # IFS=: 00:07:12.349 00:15:59 -- accel/accel.sh@20 -- # read -r var val 00:07:13.285 00:16:00 -- accel/accel.sh@21 -- # val= 00:07:13.285 00:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.285 00:16:00 -- accel/accel.sh@20 -- # IFS=: 00:07:13.285 00:16:00 -- accel/accel.sh@20 -- # read -r var val 00:07:13.285 00:16:00 -- accel/accel.sh@21 -- # val= 00:07:13.285 00:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.285 00:16:00 -- accel/accel.sh@20 -- # IFS=: 00:07:13.285 00:16:00 -- accel/accel.sh@20 -- # read -r var val 00:07:13.285 00:16:00 -- accel/accel.sh@21 -- # val= 00:07:13.285 00:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.285 00:16:00 -- accel/accel.sh@20 -- # IFS=: 00:07:13.285 00:16:00 -- accel/accel.sh@20 -- # read -r var val 00:07:13.285 00:16:00 -- accel/accel.sh@21 -- # val= 00:07:13.285 00:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.285 00:16:00 -- accel/accel.sh@20 -- # IFS=: 00:07:13.285 00:16:00 -- accel/accel.sh@20 -- # read -r var val 00:07:13.285 00:16:00 -- accel/accel.sh@21 -- # val= 00:07:13.285 00:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.285 00:16:00 -- accel/accel.sh@20 -- # IFS=: 00:07:13.285 00:16:00 -- accel/accel.sh@20 -- # read -r var val 00:07:13.285 00:16:00 -- accel/accel.sh@21 -- # val= 00:07:13.285 00:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.285 00:16:00 -- accel/accel.sh@20 -- # IFS=: 00:07:13.285 ************************************ 00:07:13.285 END TEST accel_copy_crc32c_C2 00:07:13.285 ************************************ 00:07:13.285 00:16:00 -- accel/accel.sh@20 -- # read -r var val 00:07:13.285 00:16:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.285 00:16:00 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:13.285 00:16:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.285 00:07:13.285 real 0m2.915s 00:07:13.285 user 0m2.502s 00:07:13.285 sys 0m0.213s 00:07:13.285 00:16:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.285 00:16:00 -- common/autotest_common.sh@10 -- # set +x 00:07:13.543 00:16:00 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:13.543 00:16:00 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
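The accel_copy_crc32c_C2 run recorded above exercises a copy that also computes a CRC-32C over the copied data, using two chained 4096-byte source vectors per 8192-byte transfer (per the "Vector size", "Transfer size" and "Vector count" lines in its configuration block). As a rough illustration of what the software path computes — a minimal Python sketch, not SPDK code; the function names are made up here, and the seed-0 handling and the reflected Castagnoli polynomial 0x82F63B78 are assumptions about the workload, not read out of this log — the operation can be modelled as:

    def crc32c(data, crc=0):
        # Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.
        crc ^= 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    def copy_crc32c(dst, srcs, seed=0):
        # Copy each source vector into dst back to back and checksum the copied
        # bytes; two 4096-byte vectors model the chained "-C 2" case.
        offset, crc = 0, seed
        for src in srcs:
            dst[offset:offset + len(src)] = src
            crc = crc32c(src, crc)
            offset += len(src)
        return crc

    src = bytes(range(256)) * 32          # 8192 bytes of sample data
    dst = bytearray(8192)
    crc = copy_crc32c(dst, [src[:4096], src[4096:]])
    assert bytes(dst) == src

Each transfer counted in the table above corresponds to one such copy-plus-checksum over an 8192-byte payload.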
00:07:13.543 00:16:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.543 00:16:00 -- common/autotest_common.sh@10 -- # set +x 00:07:13.543 ************************************ 00:07:13.543 START TEST accel_dualcast 00:07:13.543 ************************************ 00:07:13.543 00:16:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:13.543 00:16:00 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.543 00:16:00 -- accel/accel.sh@17 -- # local accel_module 00:07:13.543 00:16:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:13.543 00:16:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:13.543 00:16:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.543 00:16:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.543 00:16:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.543 00:16:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.543 00:16:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.543 00:16:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.543 00:16:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.543 00:16:00 -- accel/accel.sh@42 -- # jq -r . 00:07:13.543 [2024-07-13 00:16:00.568417] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:13.543 [2024-07-13 00:16:00.568512] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70642 ] 00:07:13.543 [2024-07-13 00:16:00.709481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.801 [2024-07-13 00:16:00.801227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.177 00:16:02 -- accel/accel.sh@18 -- # out=' 00:07:15.177 SPDK Configuration: 00:07:15.177 Core mask: 0x1 00:07:15.177 00:07:15.177 Accel Perf Configuration: 00:07:15.177 Workload Type: dualcast 00:07:15.177 Transfer size: 4096 bytes 00:07:15.177 Vector count 1 00:07:15.177 Module: software 00:07:15.177 Queue depth: 32 00:07:15.177 Allocate depth: 32 00:07:15.177 # threads/core: 1 00:07:15.177 Run time: 1 seconds 00:07:15.177 Verify: Yes 00:07:15.177 00:07:15.177 Running for 1 seconds... 00:07:15.177 00:07:15.177 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.177 ------------------------------------------------------------------------------------ 00:07:15.177 0,0 361056/s 1410 MiB/s 0 0 00:07:15.177 ==================================================================================== 00:07:15.177 Total 361056/s 1410 MiB/s 0 0' 00:07:15.177 00:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.177 00:16:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:15.177 00:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.178 00:16:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:15.178 00:16:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.178 00:16:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.178 00:16:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.178 00:16:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.178 00:16:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.178 00:16:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.178 00:16:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.178 00:16:02 -- accel/accel.sh@42 -- # jq -r . 
00:07:15.178 [2024-07-13 00:16:02.052309] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:15.178 [2024-07-13 00:16:02.052410] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70662 ] 00:07:15.178 [2024-07-13 00:16:02.189398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.178 [2024-07-13 00:16:02.282690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.178 00:16:02 -- accel/accel.sh@21 -- # val= 00:07:15.178 00:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.178 00:16:02 -- accel/accel.sh@21 -- # val= 00:07:15.178 00:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.178 00:16:02 -- accel/accel.sh@21 -- # val=0x1 00:07:15.178 00:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.178 00:16:02 -- accel/accel.sh@21 -- # val= 00:07:15.178 00:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.178 00:16:02 -- accel/accel.sh@21 -- # val= 00:07:15.178 00:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.178 00:16:02 -- accel/accel.sh@21 -- # val=dualcast 00:07:15.178 00:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.178 00:16:02 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.178 00:16:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.178 00:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.178 00:16:02 -- accel/accel.sh@21 -- # val= 00:07:15.178 00:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.178 00:16:02 -- accel/accel.sh@21 -- # val=software 00:07:15.178 00:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.178 00:16:02 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.178 00:16:02 -- accel/accel.sh@21 -- # val=32 00:07:15.178 00:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.178 00:16:02 -- accel/accel.sh@21 -- # val=32 00:07:15.178 00:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.178 00:16:02 -- accel/accel.sh@21 -- # val=1 00:07:15.178 00:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.178 
00:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.178 00:16:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.178 00:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.178 00:16:02 -- accel/accel.sh@21 -- # val=Yes 00:07:15.178 00:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.178 00:16:02 -- accel/accel.sh@21 -- # val= 00:07:15.178 00:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:15.178 00:16:02 -- accel/accel.sh@21 -- # val= 00:07:15.178 00:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # IFS=: 00:07:15.178 00:16:02 -- accel/accel.sh@20 -- # read -r var val 00:07:16.567 00:16:03 -- accel/accel.sh@21 -- # val= 00:07:16.567 00:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.567 00:16:03 -- accel/accel.sh@20 -- # IFS=: 00:07:16.567 00:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:16.567 00:16:03 -- accel/accel.sh@21 -- # val= 00:07:16.567 00:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.567 00:16:03 -- accel/accel.sh@20 -- # IFS=: 00:07:16.567 00:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:16.567 00:16:03 -- accel/accel.sh@21 -- # val= 00:07:16.567 00:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.567 00:16:03 -- accel/accel.sh@20 -- # IFS=: 00:07:16.567 00:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:16.567 00:16:03 -- accel/accel.sh@21 -- # val= 00:07:16.567 00:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.567 00:16:03 -- accel/accel.sh@20 -- # IFS=: 00:07:16.567 00:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:16.567 00:16:03 -- accel/accel.sh@21 -- # val= 00:07:16.567 00:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.567 00:16:03 -- accel/accel.sh@20 -- # IFS=: 00:07:16.567 00:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:16.567 00:16:03 -- accel/accel.sh@21 -- # val= 00:07:16.567 00:16:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.567 00:16:03 -- accel/accel.sh@20 -- # IFS=: 00:07:16.567 00:16:03 -- accel/accel.sh@20 -- # read -r var val 00:07:16.567 00:16:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.567 00:16:03 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:16.567 00:16:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.567 00:07:16.567 real 0m2.958s 00:07:16.567 user 0m2.525s 00:07:16.567 sys 0m0.230s 00:07:16.567 00:16:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.567 00:16:03 -- common/autotest_common.sh@10 -- # set +x 00:07:16.567 ************************************ 00:07:16.567 END TEST accel_dualcast 00:07:16.567 ************************************ 00:07:16.567 00:16:03 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:16.567 00:16:03 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:16.567 00:16:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.567 00:16:03 -- common/autotest_common.sh@10 -- # set +x 00:07:16.567 ************************************ 00:07:16.567 START TEST accel_compare 00:07:16.567 ************************************ 00:07:16.567 00:16:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:16.567 
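The dualcast test that just completed copies a single 4096-byte source into two separate destination buffers in one operation. A minimal illustrative model (plain Python, with a made-up function name, not the SPDK software module):

    def dualcast(src):
        # One source read produces two identical destination buffers.
        return bytes(src), bytes(src)

    src = b"\xa5" * 4096                  # matches the 4096-byte transfer size
    dst1, dst2 = dualcast(src)
    assert dst1 == src and dst2 == src

Since the run is configured with "Verify: Yes", a destination that did not match the source would presumably show up in the Failed/Miscompares columns.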
00:16:03 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.567 00:16:03 -- accel/accel.sh@17 -- # local accel_module 00:07:16.567 00:16:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:16.567 00:16:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:16.567 00:16:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.567 00:16:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.567 00:16:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.567 00:16:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.567 00:16:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.567 00:16:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.567 00:16:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.567 00:16:03 -- accel/accel.sh@42 -- # jq -r . 00:07:16.567 [2024-07-13 00:16:03.567954] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:16.567 [2024-07-13 00:16:03.568043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70695 ] 00:07:16.567 [2024-07-13 00:16:03.700744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.567 [2024-07-13 00:16:03.792581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.944 00:16:04 -- accel/accel.sh@18 -- # out=' 00:07:17.944 SPDK Configuration: 00:07:17.944 Core mask: 0x1 00:07:17.944 00:07:17.944 Accel Perf Configuration: 00:07:17.944 Workload Type: compare 00:07:17.944 Transfer size: 4096 bytes 00:07:17.944 Vector count 1 00:07:17.944 Module: software 00:07:17.944 Queue depth: 32 00:07:17.944 Allocate depth: 32 00:07:17.944 # threads/core: 1 00:07:17.944 Run time: 1 seconds 00:07:17.944 Verify: Yes 00:07:17.944 00:07:17.944 Running for 1 seconds... 00:07:17.944 00:07:17.944 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.944 ------------------------------------------------------------------------------------ 00:07:17.944 0,0 458208/s 1789 MiB/s 0 0 00:07:17.944 ==================================================================================== 00:07:17.944 Total 458208/s 1789 MiB/s 0 0' 00:07:17.944 00:16:04 -- accel/accel.sh@20 -- # IFS=: 00:07:17.944 00:16:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:17.944 00:16:04 -- accel/accel.sh@20 -- # read -r var val 00:07:17.944 00:16:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:17.944 00:16:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.944 00:16:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.944 00:16:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.944 00:16:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.944 00:16:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.944 00:16:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.944 00:16:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.944 00:16:04 -- accel/accel.sh@42 -- # jq -r . 00:07:17.944 [2024-07-13 00:16:05.016406] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:17.944 [2024-07-13 00:16:05.016497] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70716 ] 00:07:17.944 [2024-07-13 00:16:05.151406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.203 [2024-07-13 00:16:05.238188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.203 00:16:05 -- accel/accel.sh@21 -- # val= 00:07:18.203 00:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.203 00:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:18.203 00:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:18.203 00:16:05 -- accel/accel.sh@21 -- # val= 00:07:18.203 00:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.203 00:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:18.203 00:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:18.203 00:16:05 -- accel/accel.sh@21 -- # val=0x1 00:07:18.203 00:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.203 00:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:18.203 00:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:18.203 00:16:05 -- accel/accel.sh@21 -- # val= 00:07:18.203 00:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.203 00:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:18.203 00:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:18.203 00:16:05 -- accel/accel.sh@21 -- # val= 00:07:18.203 00:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.203 00:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:18.203 00:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:18.203 00:16:05 -- accel/accel.sh@21 -- # val=compare 00:07:18.203 00:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.204 00:16:05 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:18.204 00:16:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.204 00:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:18.204 00:16:05 -- accel/accel.sh@21 -- # val= 00:07:18.204 00:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:18.204 00:16:05 -- accel/accel.sh@21 -- # val=software 00:07:18.204 00:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.204 00:16:05 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:18.204 00:16:05 -- accel/accel.sh@21 -- # val=32 00:07:18.204 00:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:18.204 00:16:05 -- accel/accel.sh@21 -- # val=32 00:07:18.204 00:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:18.204 00:16:05 -- accel/accel.sh@21 -- # val=1 00:07:18.204 00:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:18.204 00:16:05 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:18.204 00:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:18.204 00:16:05 -- accel/accel.sh@21 -- # val=Yes 00:07:18.204 00:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:18.204 00:16:05 -- accel/accel.sh@21 -- # val= 00:07:18.204 00:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:18.204 00:16:05 -- accel/accel.sh@21 -- # val= 00:07:18.204 00:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # IFS=: 00:07:18.204 00:16:05 -- accel/accel.sh@20 -- # read -r var val 00:07:19.581 00:16:06 -- accel/accel.sh@21 -- # val= 00:07:19.581 00:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.581 00:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:19.581 00:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:19.581 00:16:06 -- accel/accel.sh@21 -- # val= 00:07:19.581 00:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.581 00:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:19.581 00:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:19.581 00:16:06 -- accel/accel.sh@21 -- # val= 00:07:19.581 00:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.581 00:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:19.581 00:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:19.581 00:16:06 -- accel/accel.sh@21 -- # val= 00:07:19.581 00:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.581 00:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:19.581 00:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:19.581 00:16:06 -- accel/accel.sh@21 -- # val= 00:07:19.581 00:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.582 00:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:19.582 00:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:19.582 00:16:06 -- accel/accel.sh@21 -- # val= 00:07:19.582 00:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.582 00:16:06 -- accel/accel.sh@20 -- # IFS=: 00:07:19.582 00:16:06 -- accel/accel.sh@20 -- # read -r var val 00:07:19.582 00:16:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.582 00:16:06 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:19.582 00:16:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.582 00:07:19.582 real 0m2.900s 00:07:19.582 user 0m2.481s 00:07:19.582 sys 0m0.217s 00:07:19.582 00:16:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.582 00:16:06 -- common/autotest_common.sh@10 -- # set +x 00:07:19.582 ************************************ 00:07:19.582 END TEST accel_compare 00:07:19.582 ************************************ 00:07:19.582 00:16:06 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:19.582 00:16:06 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:19.582 00:16:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.582 00:16:06 -- common/autotest_common.sh@10 -- # set +x 00:07:19.582 ************************************ 00:07:19.582 START TEST accel_xor 00:07:19.582 ************************************ 00:07:19.582 00:16:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:19.582 00:16:06 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.582 00:16:06 -- accel/accel.sh@17 -- # local accel_module 00:07:19.582 
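The accel_compare workload finished above is essentially a memcmp over two equal-sized 4096-byte buffers, with mismatching transfers counted in the Miscompares column. A small sketch of the idea (illustrative only; both helpers are hypothetical names, not SPDK APIs):

    def compare(a, b):
        # Software compare: the two buffers either match or they do not.
        return a == b

    def first_mismatch(a, b):
        # Index of the first differing byte, or None if the buffers are equal.
        for i, (x, y) in enumerate(zip(a, b)):
            if x != y:
                return i
        return None

    buf = bytes(4096)
    assert compare(buf, bytes(4096))
    assert first_mismatch(buf, b"\x00" * 100 + b"\x01" + b"\x00" * 3995) == 100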
00:16:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:19.582 00:16:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:19.582 00:16:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.582 00:16:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.582 00:16:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.582 00:16:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.582 00:16:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.582 00:16:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.582 00:16:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.582 00:16:06 -- accel/accel.sh@42 -- # jq -r . 00:07:19.582 [2024-07-13 00:16:06.524093] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:19.582 [2024-07-13 00:16:06.524205] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70745 ] 00:07:19.582 [2024-07-13 00:16:06.660817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.582 [2024-07-13 00:16:06.748366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.967 00:16:07 -- accel/accel.sh@18 -- # out=' 00:07:20.967 SPDK Configuration: 00:07:20.967 Core mask: 0x1 00:07:20.967 00:07:20.967 Accel Perf Configuration: 00:07:20.967 Workload Type: xor 00:07:20.967 Source buffers: 2 00:07:20.967 Transfer size: 4096 bytes 00:07:20.967 Vector count 1 00:07:20.967 Module: software 00:07:20.967 Queue depth: 32 00:07:20.967 Allocate depth: 32 00:07:20.967 # threads/core: 1 00:07:20.967 Run time: 1 seconds 00:07:20.967 Verify: Yes 00:07:20.967 00:07:20.967 Running for 1 seconds... 00:07:20.967 00:07:20.967 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.967 ------------------------------------------------------------------------------------ 00:07:20.967 0,0 258720/s 1010 MiB/s 0 0 00:07:20.967 ==================================================================================== 00:07:20.967 Total 258720/s 1010 MiB/s 0 0' 00:07:20.967 00:16:07 -- accel/accel.sh@20 -- # IFS=: 00:07:20.967 00:16:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:20.967 00:16:07 -- accel/accel.sh@20 -- # read -r var val 00:07:20.967 00:16:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:20.967 00:16:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.967 00:16:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.967 00:16:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.967 00:16:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.967 00:16:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.967 00:16:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.967 00:16:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.967 00:16:07 -- accel/accel.sh@42 -- # jq -r . 00:07:20.967 [2024-07-13 00:16:07.969983] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:20.967 [2024-07-13 00:16:07.970085] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70770 ] 00:07:20.967 [2024-07-13 00:16:08.104863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.968 [2024-07-13 00:16:08.185157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.225 00:16:08 -- accel/accel.sh@21 -- # val= 00:07:21.225 00:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.225 00:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:21.225 00:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:21.225 00:16:08 -- accel/accel.sh@21 -- # val= 00:07:21.225 00:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.225 00:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:21.225 00:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:21.225 00:16:08 -- accel/accel.sh@21 -- # val=0x1 00:07:21.225 00:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.225 00:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:21.225 00:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:21.225 00:16:08 -- accel/accel.sh@21 -- # val= 00:07:21.225 00:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.225 00:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:21.225 00:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:21.225 00:16:08 -- accel/accel.sh@21 -- # val= 00:07:21.225 00:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.225 00:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:21.225 00:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:21.225 00:16:08 -- accel/accel.sh@21 -- # val=xor 00:07:21.225 00:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.225 00:16:08 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:21.225 00:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:21.225 00:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:21.225 00:16:08 -- accel/accel.sh@21 -- # val=2 00:07:21.225 00:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.225 00:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:21.225 00:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:21.225 00:16:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.226 00:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:21.226 00:16:08 -- accel/accel.sh@21 -- # val= 00:07:21.226 00:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:21.226 00:16:08 -- accel/accel.sh@21 -- # val=software 00:07:21.226 00:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.226 00:16:08 -- accel/accel.sh@23 -- # accel_module=software 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:21.226 00:16:08 -- accel/accel.sh@21 -- # val=32 00:07:21.226 00:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:21.226 00:16:08 -- accel/accel.sh@21 -- # val=32 00:07:21.226 00:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:21.226 00:16:08 -- accel/accel.sh@21 -- # val=1 00:07:21.226 00:16:08 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:21.226 00:16:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:21.226 00:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:21.226 00:16:08 -- accel/accel.sh@21 -- # val=Yes 00:07:21.226 00:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:21.226 00:16:08 -- accel/accel.sh@21 -- # val= 00:07:21.226 00:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:21.226 00:16:08 -- accel/accel.sh@21 -- # val= 00:07:21.226 00:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # IFS=: 00:07:21.226 00:16:08 -- accel/accel.sh@20 -- # read -r var val 00:07:22.162 00:16:09 -- accel/accel.sh@21 -- # val= 00:07:22.162 00:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.162 00:16:09 -- accel/accel.sh@20 -- # IFS=: 00:07:22.162 00:16:09 -- accel/accel.sh@20 -- # read -r var val 00:07:22.162 00:16:09 -- accel/accel.sh@21 -- # val= 00:07:22.162 00:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.162 00:16:09 -- accel/accel.sh@20 -- # IFS=: 00:07:22.162 00:16:09 -- accel/accel.sh@20 -- # read -r var val 00:07:22.162 00:16:09 -- accel/accel.sh@21 -- # val= 00:07:22.162 00:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.162 00:16:09 -- accel/accel.sh@20 -- # IFS=: 00:07:22.162 00:16:09 -- accel/accel.sh@20 -- # read -r var val 00:07:22.162 00:16:09 -- accel/accel.sh@21 -- # val= 00:07:22.162 00:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.162 00:16:09 -- accel/accel.sh@20 -- # IFS=: 00:07:22.162 00:16:09 -- accel/accel.sh@20 -- # read -r var val 00:07:22.162 00:16:09 -- accel/accel.sh@21 -- # val= 00:07:22.162 00:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.162 00:16:09 -- accel/accel.sh@20 -- # IFS=: 00:07:22.162 00:16:09 -- accel/accel.sh@20 -- # read -r var val 00:07:22.162 00:16:09 -- accel/accel.sh@21 -- # val= 00:07:22.162 00:16:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.162 00:16:09 -- accel/accel.sh@20 -- # IFS=: 00:07:22.162 00:16:09 -- accel/accel.sh@20 -- # read -r var val 00:07:22.162 00:16:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.162 00:16:09 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:22.162 00:16:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.162 00:07:22.162 real 0m2.890s 00:07:22.162 user 0m2.463s 00:07:22.162 sys 0m0.222s 00:07:22.162 00:16:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.162 00:16:09 -- common/autotest_common.sh@10 -- # set +x 00:07:22.162 ************************************ 00:07:22.162 END TEST accel_xor 00:07:22.162 ************************************ 00:07:22.420 00:16:09 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:22.420 00:16:09 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:22.420 00:16:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.420 00:16:09 -- common/autotest_common.sh@10 -- # set +x 00:07:22.420 ************************************ 00:07:22.420 START TEST accel_xor 00:07:22.420 ************************************ 00:07:22.420 
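The xor run above used two 4096-byte source buffers; the run that follows repeats it with "-x 3", i.e. three sources. Both reduce to a byte-wise XOR of N equal-length buffers into one destination, roughly as in this sketch (illustrative Python, not the SPDK implementation; the helper name is invented):

    def xor_buffers(srcs):
        # Byte-wise XOR of N equal-length source buffers into a new destination.
        dst = bytearray(len(srcs[0]))
        for src in srcs:
            for i, byte in enumerate(src):
                dst[i] ^= byte
        return bytes(dst)

    a, b, c = (bytes([v]) * 4096 for v in (0x0F, 0x35, 0x55))
    assert xor_buffers([a, b]) == bytes([0x0F ^ 0x35]) * 4096
    assert xor_buffers([a, b, c]) == bytes([0x0F ^ 0x35 ^ 0x55]) * 4096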
00:16:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:22.420 00:16:09 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.420 00:16:09 -- accel/accel.sh@17 -- # local accel_module 00:07:22.420 00:16:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:22.420 00:16:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:22.420 00:16:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.420 00:16:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.420 00:16:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.420 00:16:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.420 00:16:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.421 00:16:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.421 00:16:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.421 00:16:09 -- accel/accel.sh@42 -- # jq -r . 00:07:22.421 [2024-07-13 00:16:09.463963] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:22.421 [2024-07-13 00:16:09.464104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70799 ] 00:07:22.421 [2024-07-13 00:16:09.605091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.679 [2024-07-13 00:16:09.687531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.060 00:16:10 -- accel/accel.sh@18 -- # out=' 00:07:24.060 SPDK Configuration: 00:07:24.060 Core mask: 0x1 00:07:24.060 00:07:24.060 Accel Perf Configuration: 00:07:24.060 Workload Type: xor 00:07:24.060 Source buffers: 3 00:07:24.060 Transfer size: 4096 bytes 00:07:24.060 Vector count 1 00:07:24.060 Module: software 00:07:24.060 Queue depth: 32 00:07:24.060 Allocate depth: 32 00:07:24.060 # threads/core: 1 00:07:24.060 Run time: 1 seconds 00:07:24.060 Verify: Yes 00:07:24.060 00:07:24.060 Running for 1 seconds... 00:07:24.060 00:07:24.060 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:24.060 ------------------------------------------------------------------------------------ 00:07:24.060 0,0 249216/s 973 MiB/s 0 0 00:07:24.060 ==================================================================================== 00:07:24.060 Total 249216/s 973 MiB/s 0 0' 00:07:24.060 00:16:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:24.060 00:16:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 00:16:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:24.060 00:16:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.060 00:16:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.060 00:16:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.060 00:16:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.060 00:16:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.060 00:16:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.060 00:16:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.060 00:16:10 -- accel/accel.sh@42 -- # jq -r . 00:07:24.060 [2024-07-13 00:16:10.910425] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:24.060 [2024-07-13 00:16:10.910641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70823 ] 00:07:24.060 [2024-07-13 00:16:11.042108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.060 [2024-07-13 00:16:11.125053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.060 00:16:11 -- accel/accel.sh@21 -- # val= 00:07:24.060 00:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 00:16:11 -- accel/accel.sh@21 -- # val= 00:07:24.060 00:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 00:16:11 -- accel/accel.sh@21 -- # val=0x1 00:07:24.060 00:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 00:16:11 -- accel/accel.sh@21 -- # val= 00:07:24.060 00:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 00:16:11 -- accel/accel.sh@21 -- # val= 00:07:24.060 00:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 00:16:11 -- accel/accel.sh@21 -- # val=xor 00:07:24.060 00:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 00:16:11 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 00:16:11 -- accel/accel.sh@21 -- # val=3 00:07:24.060 00:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 00:16:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.060 00:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 00:16:11 -- accel/accel.sh@21 -- # val= 00:07:24.060 00:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 00:16:11 -- accel/accel.sh@21 -- # val=software 00:07:24.060 00:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 00:16:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 00:16:11 -- accel/accel.sh@21 -- # val=32 00:07:24.060 00:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 00:16:11 -- accel/accel.sh@21 -- # val=32 00:07:24.060 00:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 00:16:11 -- accel/accel.sh@21 -- # val=1 00:07:24.060 00:16:11 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 00:16:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:24.060 00:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 00:16:11 -- accel/accel.sh@21 -- # val=Yes 00:07:24.060 00:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 00:16:11 -- accel/accel.sh@21 -- # val= 00:07:24.060 00:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:24.060 00:16:11 -- accel/accel.sh@21 -- # val= 00:07:24.060 00:16:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # IFS=: 00:07:24.060 00:16:11 -- accel/accel.sh@20 -- # read -r var val 00:07:25.434 00:16:12 -- accel/accel.sh@21 -- # val= 00:07:25.434 00:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.434 00:16:12 -- accel/accel.sh@20 -- # IFS=: 00:07:25.434 00:16:12 -- accel/accel.sh@20 -- # read -r var val 00:07:25.434 00:16:12 -- accel/accel.sh@21 -- # val= 00:07:25.434 00:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.434 00:16:12 -- accel/accel.sh@20 -- # IFS=: 00:07:25.434 00:16:12 -- accel/accel.sh@20 -- # read -r var val 00:07:25.434 00:16:12 -- accel/accel.sh@21 -- # val= 00:07:25.434 00:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.434 00:16:12 -- accel/accel.sh@20 -- # IFS=: 00:07:25.434 00:16:12 -- accel/accel.sh@20 -- # read -r var val 00:07:25.434 ************************************ 00:07:25.434 END TEST accel_xor 00:07:25.434 ************************************ 00:07:25.434 00:16:12 -- accel/accel.sh@21 -- # val= 00:07:25.434 00:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.434 00:16:12 -- accel/accel.sh@20 -- # IFS=: 00:07:25.434 00:16:12 -- accel/accel.sh@20 -- # read -r var val 00:07:25.434 00:16:12 -- accel/accel.sh@21 -- # val= 00:07:25.435 00:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.435 00:16:12 -- accel/accel.sh@20 -- # IFS=: 00:07:25.435 00:16:12 -- accel/accel.sh@20 -- # read -r var val 00:07:25.435 00:16:12 -- accel/accel.sh@21 -- # val= 00:07:25.435 00:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.435 00:16:12 -- accel/accel.sh@20 -- # IFS=: 00:07:25.435 00:16:12 -- accel/accel.sh@20 -- # read -r var val 00:07:25.435 00:16:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.435 00:16:12 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:25.435 00:16:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.435 00:07:25.435 real 0m2.889s 00:07:25.435 user 0m2.474s 00:07:25.435 sys 0m0.215s 00:07:25.435 00:16:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.435 00:16:12 -- common/autotest_common.sh@10 -- # set +x 00:07:25.435 00:16:12 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:25.435 00:16:12 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:25.435 00:16:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.435 00:16:12 -- common/autotest_common.sh@10 -- # set +x 00:07:25.435 ************************************ 00:07:25.435 START TEST accel_dif_verify 00:07:25.435 ************************************ 
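A quick way to sanity-check the throughput tables in these runs: the MiB/s column is simply the transfers/s figure multiplied by the transfer size (4096 bytes for these xor runs). A one-line helper makes the arithmetic explicit (the function name is just for illustration):

    def mib_per_s(transfers_per_s, transfer_bytes=4096):
        # Bandwidth column = transfers per second * transfer size, in MiB/s.
        return transfers_per_s * transfer_bytes / (1 << 20)

    print(mib_per_s(258720))   # ~1010.6 -> the "1010 MiB/s" two-source xor figure
    print(mib_per_s(249216))   # ~973.5  -> the "973 MiB/s" three-source xor figure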
00:07:25.435 00:16:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:25.435 00:16:12 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.435 00:16:12 -- accel/accel.sh@17 -- # local accel_module 00:07:25.435 00:16:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:25.435 00:16:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:25.435 00:16:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.435 00:16:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.435 00:16:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.435 00:16:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.435 00:16:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.435 00:16:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.435 00:16:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.435 00:16:12 -- accel/accel.sh@42 -- # jq -r . 00:07:25.435 [2024-07-13 00:16:12.398820] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:25.435 [2024-07-13 00:16:12.399050] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70853 ] 00:07:25.435 [2024-07-13 00:16:12.530861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.435 [2024-07-13 00:16:12.615550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.808 00:16:13 -- accel/accel.sh@18 -- # out=' 00:07:26.808 SPDK Configuration: 00:07:26.808 Core mask: 0x1 00:07:26.808 00:07:26.808 Accel Perf Configuration: 00:07:26.808 Workload Type: dif_verify 00:07:26.808 Vector size: 4096 bytes 00:07:26.808 Transfer size: 4096 bytes 00:07:26.808 Block size: 512 bytes 00:07:26.808 Metadata size: 8 bytes 00:07:26.808 Vector count 1 00:07:26.808 Module: software 00:07:26.808 Queue depth: 32 00:07:26.808 Allocate depth: 32 00:07:26.808 # threads/core: 1 00:07:26.808 Run time: 1 seconds 00:07:26.808 Verify: No 00:07:26.808 00:07:26.808 Running for 1 seconds... 00:07:26.808 00:07:26.809 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.809 ------------------------------------------------------------------------------------ 00:07:26.809 0,0 105728/s 419 MiB/s 0 0 00:07:26.809 ==================================================================================== 00:07:26.809 Total 105728/s 413 MiB/s 0 0' 00:07:26.809 00:16:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:26.809 00:16:13 -- accel/accel.sh@20 -- # IFS=: 00:07:26.809 00:16:13 -- accel/accel.sh@20 -- # read -r var val 00:07:26.809 00:16:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:26.809 00:16:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.809 00:16:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.809 00:16:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.809 00:16:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.809 00:16:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.809 00:16:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.809 00:16:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.809 00:16:13 -- accel/accel.sh@42 -- # jq -r . 00:07:26.809 [2024-07-13 00:16:13.842896] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:26.809 [2024-07-13 00:16:13.842993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70867 ] 00:07:26.809 [2024-07-13 00:16:13.980250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.067 [2024-07-13 00:16:14.070104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.067 00:16:14 -- accel/accel.sh@21 -- # val= 00:07:27.067 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.067 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.067 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.067 00:16:14 -- accel/accel.sh@21 -- # val= 00:07:27.067 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.067 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.067 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.067 00:16:14 -- accel/accel.sh@21 -- # val=0x1 00:07:27.067 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.067 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.067 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.067 00:16:14 -- accel/accel.sh@21 -- # val= 00:07:27.067 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.067 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.067 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.067 00:16:14 -- accel/accel.sh@21 -- # val= 00:07:27.068 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.068 00:16:14 -- accel/accel.sh@21 -- # val=dif_verify 00:07:27.068 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.068 00:16:14 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.068 00:16:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.068 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.068 00:16:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.068 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.068 00:16:14 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:27.068 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.068 00:16:14 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:27.068 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.068 00:16:14 -- accel/accel.sh@21 -- # val= 00:07:27.068 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.068 00:16:14 -- accel/accel.sh@21 -- # val=software 00:07:27.068 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.068 00:16:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.068 00:16:14 -- accel/accel.sh@21 
-- # val=32 00:07:27.068 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.068 00:16:14 -- accel/accel.sh@21 -- # val=32 00:07:27.068 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.068 00:16:14 -- accel/accel.sh@21 -- # val=1 00:07:27.068 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.068 00:16:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.068 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.068 00:16:14 -- accel/accel.sh@21 -- # val=No 00:07:27.068 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.068 00:16:14 -- accel/accel.sh@21 -- # val= 00:07:27.068 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:27.068 00:16:14 -- accel/accel.sh@21 -- # val= 00:07:27.068 00:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # IFS=: 00:07:27.068 00:16:14 -- accel/accel.sh@20 -- # read -r var val 00:07:28.443 00:16:15 -- accel/accel.sh@21 -- # val= 00:07:28.443 00:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.443 00:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:28.443 00:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:28.443 00:16:15 -- accel/accel.sh@21 -- # val= 00:07:28.443 00:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.443 00:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:28.443 00:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:28.443 00:16:15 -- accel/accel.sh@21 -- # val= 00:07:28.443 00:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.443 00:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:28.443 00:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:28.443 ************************************ 00:07:28.443 END TEST accel_dif_verify 00:07:28.443 ************************************ 00:07:28.443 00:16:15 -- accel/accel.sh@21 -- # val= 00:07:28.443 00:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.443 00:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:28.443 00:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:28.443 00:16:15 -- accel/accel.sh@21 -- # val= 00:07:28.443 00:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.443 00:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:28.443 00:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:28.443 00:16:15 -- accel/accel.sh@21 -- # val= 00:07:28.443 00:16:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.443 00:16:15 -- accel/accel.sh@20 -- # IFS=: 00:07:28.443 00:16:15 -- accel/accel.sh@20 -- # read -r var val 00:07:28.443 00:16:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.443 00:16:15 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:28.443 00:16:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.443 00:07:28.443 real 0m2.893s 00:07:28.443 user 0m2.480s 00:07:28.443 sys 0m0.212s 00:07:28.443 00:16:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.443 
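The dif_verify run above, and the dif_generate run that follows, operate on 512-byte data blocks each carrying an 8-byte protection field, per the "Block size: 512 bytes" and "Metadata size: 8 bytes" configuration lines. As a rough model — a sketch assuming the usual T10 DIF layout of a 2-byte guard CRC-16 with polynomial 0x8BB7, a 2-byte application tag and a 4-byte reference tag; beyond the block and metadata sizes, none of this is taken from the log, and the function names are invented — generate builds the field and verify recomputes and checks the guard:

    def crc16_t10dif(data, crc=0):
        # Bitwise CRC-16 over the data block, T10-DIF polynomial 0x8BB7 (assumed).
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                if crc & 0x8000:
                    crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF
                else:
                    crc = (crc << 1) & 0xFFFF
        return crc

    def dif_generate(block, app_tag=0, ref_tag=0):
        # Build the 8-byte protection field: guard CRC, app tag, ref tag.
        return (crc16_t10dif(block).to_bytes(2, "big")
                + app_tag.to_bytes(2, "big")
                + ref_tag.to_bytes(4, "big"))

    def dif_verify(block, dif):
        # Recompute the guard over the data and compare with the stored one.
        return crc16_t10dif(block) == int.from_bytes(dif[:2], "big")

    block = bytes(range(256)) * 2         # one 512-byte data block
    assert dif_verify(block, dif_generate(block))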
00:16:15 -- common/autotest_common.sh@10 -- # set +x 00:07:28.443 00:16:15 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:28.443 00:16:15 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:28.443 00:16:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.443 00:16:15 -- common/autotest_common.sh@10 -- # set +x 00:07:28.443 ************************************ 00:07:28.443 START TEST accel_dif_generate 00:07:28.443 ************************************ 00:07:28.443 00:16:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:28.443 00:16:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.443 00:16:15 -- accel/accel.sh@17 -- # local accel_module 00:07:28.443 00:16:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:28.443 00:16:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:28.443 00:16:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.443 00:16:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.443 00:16:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.443 00:16:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.443 00:16:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.443 00:16:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.443 00:16:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.443 00:16:15 -- accel/accel.sh@42 -- # jq -r . 00:07:28.443 [2024-07-13 00:16:15.346363] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:28.443 [2024-07-13 00:16:15.346466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70907 ] 00:07:28.443 [2024-07-13 00:16:15.486686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.443 [2024-07-13 00:16:15.582781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.823 00:16:16 -- accel/accel.sh@18 -- # out=' 00:07:29.823 SPDK Configuration: 00:07:29.823 Core mask: 0x1 00:07:29.823 00:07:29.823 Accel Perf Configuration: 00:07:29.823 Workload Type: dif_generate 00:07:29.823 Vector size: 4096 bytes 00:07:29.823 Transfer size: 4096 bytes 00:07:29.823 Block size: 512 bytes 00:07:29.823 Metadata size: 8 bytes 00:07:29.823 Vector count 1 00:07:29.823 Module: software 00:07:29.823 Queue depth: 32 00:07:29.823 Allocate depth: 32 00:07:29.823 # threads/core: 1 00:07:29.823 Run time: 1 seconds 00:07:29.823 Verify: No 00:07:29.823 00:07:29.823 Running for 1 seconds... 
00:07:29.823 00:07:29.823 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:29.823 ------------------------------------------------------------------------------------ 00:07:29.823 0,0 123904/s 491 MiB/s 0 0 00:07:29.823 ==================================================================================== 00:07:29.823 Total 123904/s 484 MiB/s 0 0' 00:07:29.823 00:16:16 -- accel/accel.sh@20 -- # IFS=: 00:07:29.823 00:16:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:29.823 00:16:16 -- accel/accel.sh@20 -- # read -r var val 00:07:29.823 00:16:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:29.823 00:16:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.823 00:16:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.823 00:16:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.823 00:16:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.823 00:16:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.823 00:16:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.823 00:16:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.823 00:16:16 -- accel/accel.sh@42 -- # jq -r . 00:07:29.823 [2024-07-13 00:16:16.820290] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:29.823 [2024-07-13 00:16:16.820368] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70921 ] 00:07:29.823 [2024-07-13 00:16:16.953089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.823 [2024-07-13 00:16:17.040967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val= 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val= 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val=0x1 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val= 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val= 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val=dif_generate 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 
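The dif_generate numbers above come from the accel_perf example app; accel.sh builds an accel JSON config in accel_json_cfg (empty in this job, so the runs use the software module) and feeds it to accel_perf over /dev/fd/62. As a rough sketch, assuming the same /home/vagrant/spdk_repo build tree and the default software module, the run can be repeated by hand with:

  # 1-second software dif_generate run; same flags accel.sh passes, minus the fd-based config
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w dif_generate

Only -t and -w are set explicitly, so the 4096-byte vector/transfer size, 512-byte block size and 8-byte metadata size reported above come from accel_perf's defaults for this workload.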
00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val= 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val=software 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val=32 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val=32 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val=1 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val=No 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val= 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:30.082 00:16:17 -- accel/accel.sh@21 -- # val= 00:07:30.082 00:16:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # IFS=: 00:07:30.082 00:16:17 -- accel/accel.sh@20 -- # read -r var val 00:07:31.016 00:16:18 -- accel/accel.sh@21 -- # val= 00:07:31.016 00:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.016 00:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:31.016 00:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:31.016 00:16:18 -- accel/accel.sh@21 -- # val= 00:07:31.016 00:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.016 00:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:31.016 00:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:31.016 00:16:18 -- accel/accel.sh@21 -- # val= 00:07:31.016 00:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.016 00:16:18 -- 
accel/accel.sh@20 -- # IFS=: 00:07:31.016 00:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:31.016 00:16:18 -- accel/accel.sh@21 -- # val= 00:07:31.016 00:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.016 00:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:31.016 00:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:31.016 00:16:18 -- accel/accel.sh@21 -- # val= 00:07:31.016 00:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.016 00:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:31.016 00:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:31.016 00:16:18 -- accel/accel.sh@21 -- # val= 00:07:31.274 00:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.274 00:16:18 -- accel/accel.sh@20 -- # IFS=: 00:07:31.274 00:16:18 -- accel/accel.sh@20 -- # read -r var val 00:07:31.274 00:16:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.274 00:16:18 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:31.274 ************************************ 00:07:31.274 END TEST accel_dif_generate 00:07:31.274 ************************************ 00:07:31.274 00:16:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.274 00:07:31.274 real 0m2.925s 00:07:31.274 user 0m2.518s 00:07:31.274 sys 0m0.204s 00:07:31.274 00:16:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.274 00:16:18 -- common/autotest_common.sh@10 -- # set +x 00:07:31.274 00:16:18 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:31.274 00:16:18 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:31.274 00:16:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.274 00:16:18 -- common/autotest_common.sh@10 -- # set +x 00:07:31.274 ************************************ 00:07:31.274 START TEST accel_dif_generate_copy 00:07:31.274 ************************************ 00:07:31.274 00:16:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:31.274 00:16:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.274 00:16:18 -- accel/accel.sh@17 -- # local accel_module 00:07:31.274 00:16:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:31.274 00:16:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:31.274 00:16:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.274 00:16:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.274 00:16:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.274 00:16:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.274 00:16:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.274 00:16:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.274 00:16:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.274 00:16:18 -- accel/accel.sh@42 -- # jq -r . 00:07:31.274 [2024-07-13 00:16:18.316235] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:31.274 [2024-07-13 00:16:18.316342] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70961 ] 00:07:31.274 [2024-07-13 00:16:18.454748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.532 [2024-07-13 00:16:18.550289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.903 00:16:19 -- accel/accel.sh@18 -- # out=' 00:07:32.903 SPDK Configuration: 00:07:32.903 Core mask: 0x1 00:07:32.903 00:07:32.903 Accel Perf Configuration: 00:07:32.903 Workload Type: dif_generate_copy 00:07:32.903 Vector size: 4096 bytes 00:07:32.903 Transfer size: 4096 bytes 00:07:32.903 Vector count 1 00:07:32.903 Module: software 00:07:32.903 Queue depth: 32 00:07:32.903 Allocate depth: 32 00:07:32.903 # threads/core: 1 00:07:32.903 Run time: 1 seconds 00:07:32.903 Verify: No 00:07:32.903 00:07:32.903 Running for 1 seconds... 00:07:32.903 00:07:32.903 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.903 ------------------------------------------------------------------------------------ 00:07:32.903 0,0 99328/s 394 MiB/s 0 0 00:07:32.904 ==================================================================================== 00:07:32.904 Total 99328/s 388 MiB/s 0 0' 00:07:32.904 00:16:19 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:32.904 00:16:19 -- accel/accel.sh@20 -- # read -r var val 00:07:32.904 00:16:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:32.904 00:16:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.904 00:16:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.904 00:16:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.904 00:16:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.904 00:16:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.904 00:16:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.904 00:16:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.904 00:16:19 -- accel/accel.sh@42 -- # jq -r . 00:07:32.904 [2024-07-13 00:16:19.773910] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
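dif_generate_copy is the same 1-second software loop, but (as the workload name suggests) the operation also copies the source buffer while inserting the protection information, which is why it lands below plain dif_generate on this host (99328 vs 123904 transfers/s). The equivalent direct invocation (same sketch caveats as above) would be:

  ./build/examples/accel_perf -t 1 -w dif_generate_copy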
00:07:32.904 [2024-07-13 00:16:19.774038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70975 ] 00:07:32.904 [2024-07-13 00:16:19.912725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.904 [2024-07-13 00:16:20.003646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.904 00:16:20 -- accel/accel.sh@21 -- # val= 00:07:32.904 00:16:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:32.904 00:16:20 -- accel/accel.sh@21 -- # val= 00:07:32.904 00:16:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:32.904 00:16:20 -- accel/accel.sh@21 -- # val=0x1 00:07:32.904 00:16:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:32.904 00:16:20 -- accel/accel.sh@21 -- # val= 00:07:32.904 00:16:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:32.904 00:16:20 -- accel/accel.sh@21 -- # val= 00:07:32.904 00:16:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:32.904 00:16:20 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:32.904 00:16:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.904 00:16:20 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:32.904 00:16:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:32.904 00:16:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:32.904 00:16:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:32.904 00:16:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:32.904 00:16:20 -- accel/accel.sh@21 -- # val= 00:07:32.904 00:16:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:32.904 00:16:20 -- accel/accel.sh@21 -- # val=software 00:07:32.904 00:16:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.904 00:16:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:32.904 00:16:20 -- accel/accel.sh@21 -- # val=32 00:07:32.904 00:16:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:32.904 00:16:20 -- accel/accel.sh@21 -- # val=32 00:07:32.904 00:16:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:32.904 00:16:20 -- accel/accel.sh@21 
-- # val=1 00:07:32.904 00:16:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:32.904 00:16:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:32.904 00:16:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:32.904 00:16:20 -- accel/accel.sh@21 -- # val=No 00:07:32.904 00:16:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:32.904 00:16:20 -- accel/accel.sh@21 -- # val= 00:07:32.904 00:16:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:32.904 00:16:20 -- accel/accel.sh@21 -- # val= 00:07:32.904 00:16:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:32.904 00:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:34.275 00:16:21 -- accel/accel.sh@21 -- # val= 00:07:34.275 00:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.275 00:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:34.276 00:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:34.276 00:16:21 -- accel/accel.sh@21 -- # val= 00:07:34.276 00:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.276 00:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:34.276 00:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:34.276 00:16:21 -- accel/accel.sh@21 -- # val= 00:07:34.276 00:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.276 00:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:34.276 00:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:34.276 00:16:21 -- accel/accel.sh@21 -- # val= 00:07:34.276 00:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.276 00:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:34.276 00:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:34.276 00:16:21 -- accel/accel.sh@21 -- # val= 00:07:34.276 00:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.276 00:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:34.276 00:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:34.276 00:16:21 -- accel/accel.sh@21 -- # val= 00:07:34.276 00:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.276 00:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:34.276 00:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:34.276 00:16:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.276 00:16:21 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:34.276 00:16:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.276 00:07:34.276 real 0m2.917s 00:07:34.276 user 0m2.484s 00:07:34.276 sys 0m0.230s 00:07:34.276 00:16:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.276 ************************************ 00:07:34.276 00:16:21 -- common/autotest_common.sh@10 -- # set +x 00:07:34.276 END TEST accel_dif_generate_copy 00:07:34.276 ************************************ 00:07:34.276 00:16:21 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:34.276 00:16:21 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.276 00:16:21 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:34.276 00:16:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.276 00:16:21 -- 
common/autotest_common.sh@10 -- # set +x 00:07:34.276 ************************************ 00:07:34.276 START TEST accel_comp 00:07:34.276 ************************************ 00:07:34.276 00:16:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.276 00:16:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.276 00:16:21 -- accel/accel.sh@17 -- # local accel_module 00:07:34.276 00:16:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.276 00:16:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.276 00:16:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.276 00:16:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.276 00:16:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.276 00:16:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.276 00:16:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.276 00:16:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.276 00:16:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.276 00:16:21 -- accel/accel.sh@42 -- # jq -r . 00:07:34.276 [2024-07-13 00:16:21.284188] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:34.276 [2024-07-13 00:16:21.284296] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71010 ] 00:07:34.276 [2024-07-13 00:16:21.421844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.533 [2024-07-13 00:16:21.510558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.903 00:16:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:35.903 00:07:35.903 SPDK Configuration: 00:07:35.903 Core mask: 0x1 00:07:35.903 00:07:35.903 Accel Perf Configuration: 00:07:35.903 Workload Type: compress 00:07:35.903 Transfer size: 4096 bytes 00:07:35.903 Vector count 1 00:07:35.903 Module: software 00:07:35.903 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.903 Queue depth: 32 00:07:35.903 Allocate depth: 32 00:07:35.903 # threads/core: 1 00:07:35.903 Run time: 1 seconds 00:07:35.903 Verify: No 00:07:35.903 00:07:35.903 Running for 1 seconds... 
00:07:35.903 00:07:35.903 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.903 ------------------------------------------------------------------------------------ 00:07:35.903 0,0 51168/s 213 MiB/s 0 0 00:07:35.903 ==================================================================================== 00:07:35.903 Total 51168/s 199 MiB/s 0 0' 00:07:35.903 00:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.903 00:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.903 00:16:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.903 00:16:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.903 00:16:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.903 00:16:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.903 00:16:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.903 00:16:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.903 00:16:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.903 00:16:22 -- accel/accel.sh@42 -- # jq -r . 00:07:35.903 [2024-07-13 00:16:22.747721] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:35.903 [2024-07-13 00:16:22.747834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71029 ] 00:07:35.903 [2024-07-13 00:16:22.889381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.903 [2024-07-13 00:16:22.973026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val= 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val= 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val= 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val=0x1 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val= 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val= 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val=compress 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 
00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val= 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val=software 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val=32 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val=32 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val=1 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val=No 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val= 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:35.903 00:16:23 -- accel/accel.sh@21 -- # val= 00:07:35.903 00:16:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # IFS=: 00:07:35.903 00:16:23 -- accel/accel.sh@20 -- # read -r var val 00:07:37.271 00:16:24 -- accel/accel.sh@21 -- # val= 00:07:37.271 00:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.271 00:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:37.271 00:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:37.271 00:16:24 -- accel/accel.sh@21 -- # val= 00:07:37.271 00:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.271 00:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:37.271 00:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:37.271 00:16:24 -- accel/accel.sh@21 -- # val= 00:07:37.271 00:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.271 00:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:37.271 00:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:37.271 00:16:24 -- accel/accel.sh@21 -- # val= 
00:07:37.271 00:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.271 00:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:37.271 00:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:37.271 00:16:24 -- accel/accel.sh@21 -- # val= 00:07:37.271 00:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.271 00:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:37.271 00:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:37.271 00:16:24 -- accel/accel.sh@21 -- # val= 00:07:37.271 00:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.271 00:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:37.271 00:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:37.271 00:16:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.271 00:16:24 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:37.271 00:16:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.271 00:07:37.271 real 0m2.923s 00:07:37.271 user 0m2.498s 00:07:37.271 sys 0m0.219s 00:07:37.271 00:16:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.271 00:16:24 -- common/autotest_common.sh@10 -- # set +x 00:07:37.271 ************************************ 00:07:37.271 END TEST accel_comp 00:07:37.271 ************************************ 00:07:37.271 00:16:24 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.271 00:16:24 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:37.271 00:16:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.271 00:16:24 -- common/autotest_common.sh@10 -- # set +x 00:07:37.271 ************************************ 00:07:37.271 START TEST accel_decomp 00:07:37.271 ************************************ 00:07:37.271 00:16:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.271 00:16:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.271 00:16:24 -- accel/accel.sh@17 -- # local accel_module 00:07:37.271 00:16:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.271 00:16:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.271 00:16:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.271 00:16:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.271 00:16:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.271 00:16:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.271 00:16:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.271 00:16:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.271 00:16:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.271 00:16:24 -- accel/accel.sh@42 -- # jq -r . 00:07:37.271 [2024-07-13 00:16:24.258927] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:37.271 [2024-07-13 00:16:24.259020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71064 ] 00:07:37.271 [2024-07-13 00:16:24.400131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.529 [2024-07-13 00:16:24.502452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.904 00:16:25 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:38.904 00:07:38.904 SPDK Configuration: 00:07:38.904 Core mask: 0x1 00:07:38.904 00:07:38.904 Accel Perf Configuration: 00:07:38.904 Workload Type: decompress 00:07:38.904 Transfer size: 4096 bytes 00:07:38.904 Vector count 1 00:07:38.904 Module: software 00:07:38.904 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:38.904 Queue depth: 32 00:07:38.904 Allocate depth: 32 00:07:38.904 # threads/core: 1 00:07:38.904 Run time: 1 seconds 00:07:38.904 Verify: Yes 00:07:38.904 00:07:38.904 Running for 1 seconds... 00:07:38.904 00:07:38.904 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.904 ------------------------------------------------------------------------------------ 00:07:38.904 0,0 69344/s 127 MiB/s 0 0 00:07:38.904 ==================================================================================== 00:07:38.904 Total 69344/s 270 MiB/s 0 0' 00:07:38.904 00:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:38.904 00:16:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:38.904 00:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:38.904 00:16:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.904 00:16:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:38.904 00:16:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.904 00:16:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.904 00:16:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.904 00:16:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.904 00:16:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.904 00:16:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.904 00:16:25 -- accel/accel.sh@42 -- # jq -r . 00:07:38.904 [2024-07-13 00:16:25.737737] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
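The compress and decompress cases need real input data, so accel.sh points accel_perf at the test/accel/bib file with -l, and the decompress runs add -y so every output buffer is verified (Verify: Yes above, versus Verify: No for the DIF workloads). A sketch of the two software runs against the same tree:

  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w compress   -l ./test/accel/bib       # ~51k ops/s in this job
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y    # ~69k ops/s in this job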
00:07:38.904 [2024-07-13 00:16:25.737832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71083 ] 00:07:38.904 [2024-07-13 00:16:25.877097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.904 [2024-07-13 00:16:25.968869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.904 00:16:26 -- accel/accel.sh@21 -- # val= 00:07:38.904 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.904 00:16:26 -- accel/accel.sh@21 -- # val= 00:07:38.904 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.904 00:16:26 -- accel/accel.sh@21 -- # val= 00:07:38.904 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.904 00:16:26 -- accel/accel.sh@21 -- # val=0x1 00:07:38.904 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.904 00:16:26 -- accel/accel.sh@21 -- # val= 00:07:38.904 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.904 00:16:26 -- accel/accel.sh@21 -- # val= 00:07:38.904 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.904 00:16:26 -- accel/accel.sh@21 -- # val=decompress 00:07:38.904 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.904 00:16:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.904 00:16:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:38.904 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.904 00:16:26 -- accel/accel.sh@21 -- # val= 00:07:38.904 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.904 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.905 00:16:26 -- accel/accel.sh@21 -- # val=software 00:07:38.905 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.905 00:16:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.905 00:16:26 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:38.905 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.905 00:16:26 -- accel/accel.sh@21 -- # val=32 00:07:38.905 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.905 00:16:26 -- 
accel/accel.sh@21 -- # val=32 00:07:38.905 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.905 00:16:26 -- accel/accel.sh@21 -- # val=1 00:07:38.905 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.905 00:16:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:38.905 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.905 00:16:26 -- accel/accel.sh@21 -- # val=Yes 00:07:38.905 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.905 00:16:26 -- accel/accel.sh@21 -- # val= 00:07:38.905 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:38.905 00:16:26 -- accel/accel.sh@21 -- # val= 00:07:38.905 00:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:38.905 00:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:40.279 00:16:27 -- accel/accel.sh@21 -- # val= 00:07:40.279 00:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.279 00:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:40.279 00:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:40.279 00:16:27 -- accel/accel.sh@21 -- # val= 00:07:40.279 00:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.279 00:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:40.279 00:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:40.279 00:16:27 -- accel/accel.sh@21 -- # val= 00:07:40.279 00:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.279 00:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:40.279 00:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:40.279 00:16:27 -- accel/accel.sh@21 -- # val= 00:07:40.279 00:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.279 00:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:40.279 00:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:40.279 00:16:27 -- accel/accel.sh@21 -- # val= 00:07:40.279 00:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.279 00:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:40.279 ************************************ 00:07:40.279 END TEST accel_decomp 00:07:40.279 ************************************ 00:07:40.279 00:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:40.279 00:16:27 -- accel/accel.sh@21 -- # val= 00:07:40.279 00:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.279 00:16:27 -- accel/accel.sh@20 -- # IFS=: 00:07:40.279 00:16:27 -- accel/accel.sh@20 -- # read -r var val 00:07:40.279 00:16:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.279 00:16:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:40.279 00:16:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.279 00:07:40.279 real 0m2.960s 00:07:40.279 user 0m2.520s 00:07:40.279 sys 0m0.234s 00:07:40.279 00:16:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.279 00:16:27 -- common/autotest_common.sh@10 -- # set +x 00:07:40.279 00:16:27 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:07:40.279 00:16:27 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:40.279 00:16:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.279 00:16:27 -- common/autotest_common.sh@10 -- # set +x 00:07:40.279 ************************************ 00:07:40.279 START TEST accel_decmop_full 00:07:40.279 ************************************ 00:07:40.279 00:16:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:40.279 00:16:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.279 00:16:27 -- accel/accel.sh@17 -- # local accel_module 00:07:40.279 00:16:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:40.279 00:16:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:40.279 00:16:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.279 00:16:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.279 00:16:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.279 00:16:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.279 00:16:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.279 00:16:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.279 00:16:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.279 00:16:27 -- accel/accel.sh@42 -- # jq -r . 00:07:40.279 [2024-07-13 00:16:27.278440] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:40.279 [2024-07-13 00:16:27.278533] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71118 ] 00:07:40.279 [2024-07-13 00:16:27.409959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.554 [2024-07-13 00:16:27.510877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.928 00:16:28 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:41.928 00:07:41.928 SPDK Configuration: 00:07:41.928 Core mask: 0x1 00:07:41.928 00:07:41.928 Accel Perf Configuration: 00:07:41.928 Workload Type: decompress 00:07:41.929 Transfer size: 111250 bytes 00:07:41.929 Vector count 1 00:07:41.929 Module: software 00:07:41.929 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.929 Queue depth: 32 00:07:41.929 Allocate depth: 32 00:07:41.929 # threads/core: 1 00:07:41.929 Run time: 1 seconds 00:07:41.929 Verify: Yes 00:07:41.929 00:07:41.929 Running for 1 seconds... 
00:07:41.929 00:07:41.929 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:41.929 ------------------------------------------------------------------------------------ 00:07:41.929 0,0 4736/s 195 MiB/s 0 0 00:07:41.929 ==================================================================================== 00:07:41.929 Total 4736/s 502 MiB/s 0 0' 00:07:41.929 00:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:41.929 00:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:41.929 00:16:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.929 00:16:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.929 00:16:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.929 00:16:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.929 00:16:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.929 00:16:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.929 00:16:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.929 00:16:28 -- accel/accel.sh@42 -- # jq -r . 00:07:41.929 [2024-07-13 00:16:28.752281] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:41.929 [2024-07-13 00:16:28.752372] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71132 ] 00:07:41.929 [2024-07-13 00:16:28.889867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.929 [2024-07-13 00:16:28.987761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val= 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val= 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val= 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val=0x1 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val= 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val= 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val=decompress 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:41.929 00:16:29 -- accel/accel.sh@20 
-- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val= 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val=software 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val=32 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val=32 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val=1 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val=Yes 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val= 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:41.929 00:16:29 -- accel/accel.sh@21 -- # val= 00:07:41.929 00:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:41.929 00:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:43.303 00:16:30 -- accel/accel.sh@21 -- # val= 00:07:43.303 00:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.303 00:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:43.303 00:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:43.303 00:16:30 -- accel/accel.sh@21 -- # val= 00:07:43.303 00:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.303 00:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:43.303 00:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:43.303 00:16:30 -- accel/accel.sh@21 -- # val= 00:07:43.303 00:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.303 00:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:43.303 00:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:43.303 00:16:30 -- accel/accel.sh@21 -- # 
val= 00:07:43.303 00:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.303 00:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:43.303 00:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:43.304 00:16:30 -- accel/accel.sh@21 -- # val= 00:07:43.304 00:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.304 00:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:43.304 00:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:43.304 00:16:30 -- accel/accel.sh@21 -- # val= 00:07:43.304 00:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.304 00:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:43.304 00:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:43.304 00:16:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.304 00:16:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:43.304 00:16:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.304 00:07:43.304 real 0m2.962s 00:07:43.304 user 0m2.534s 00:07:43.304 sys 0m0.223s 00:07:43.304 00:16:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.304 00:16:30 -- common/autotest_common.sh@10 -- # set +x 00:07:43.304 ************************************ 00:07:43.304 END TEST accel_decmop_full 00:07:43.304 ************************************ 00:07:43.304 00:16:30 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.304 00:16:30 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:43.304 00:16:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.304 00:16:30 -- common/autotest_common.sh@10 -- # set +x 00:07:43.304 ************************************ 00:07:43.304 START TEST accel_decomp_mcore 00:07:43.304 ************************************ 00:07:43.304 00:16:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.304 00:16:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.304 00:16:30 -- accel/accel.sh@17 -- # local accel_module 00:07:43.304 00:16:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.304 00:16:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.304 00:16:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.304 00:16:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.304 00:16:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.304 00:16:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.304 00:16:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.304 00:16:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.304 00:16:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.304 00:16:30 -- accel/accel.sh@42 -- # jq -r . 00:07:43.304 [2024-07-13 00:16:30.301038] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:43.304 [2024-07-13 00:16:30.301145] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71172 ] 00:07:43.304 [2024-07-13 00:16:30.438920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.562 [2024-07-13 00:16:30.533292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.562 [2024-07-13 00:16:30.533438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.562 [2024-07-13 00:16:30.534025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.562 [2024-07-13 00:16:30.534036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.939 00:16:31 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:44.939 00:07:44.939 SPDK Configuration: 00:07:44.939 Core mask: 0xf 00:07:44.939 00:07:44.939 Accel Perf Configuration: 00:07:44.939 Workload Type: decompress 00:07:44.939 Transfer size: 4096 bytes 00:07:44.939 Vector count 1 00:07:44.939 Module: software 00:07:44.939 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.939 Queue depth: 32 00:07:44.939 Allocate depth: 32 00:07:44.939 # threads/core: 1 00:07:44.939 Run time: 1 seconds 00:07:44.939 Verify: Yes 00:07:44.939 00:07:44.939 Running for 1 seconds... 00:07:44.939 00:07:44.939 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.939 ------------------------------------------------------------------------------------ 00:07:44.939 0,0 60928/s 112 MiB/s 0 0 00:07:44.939 3,0 59488/s 109 MiB/s 0 0 00:07:44.939 2,0 59072/s 108 MiB/s 0 0 00:07:44.939 1,0 58720/s 108 MiB/s 0 0 00:07:44.939 ==================================================================================== 00:07:44.939 Total 238208/s 930 MiB/s 0 0' 00:07:44.939 00:16:31 -- accel/accel.sh@20 -- # IFS=: 00:07:44.939 00:16:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:44.939 00:16:31 -- accel/accel.sh@20 -- # read -r var val 00:07:44.939 00:16:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:44.939 00:16:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.939 00:16:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.939 00:16:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.939 00:16:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.939 00:16:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.939 00:16:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.939 00:16:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.939 00:16:31 -- accel/accel.sh@42 -- # jq -r . 00:07:44.939 [2024-07-13 00:16:31.767495] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:44.939 [2024-07-13 00:16:31.767831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71189 ] 00:07:44.939 [2024-07-13 00:16:31.908780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.939 [2024-07-13 00:16:32.005691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.940 [2024-07-13 00:16:32.005757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.940 [2024-07-13 00:16:32.005891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.940 [2024-07-13 00:16:32.005894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val= 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val= 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val= 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val=0xf 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val= 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val= 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val=decompress 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val= 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val=software 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@23 -- # accel_module=software 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 
00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val=32 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val=32 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val=1 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val=Yes 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val= 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:44.940 00:16:32 -- accel/accel.sh@21 -- # val= 00:07:44.940 00:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:44.940 00:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:46.320 00:16:33 -- accel/accel.sh@21 -- # val= 00:07:46.320 00:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.320 00:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:46.320 00:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:46.320 00:16:33 -- accel/accel.sh@21 -- # val= 00:07:46.320 00:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.320 00:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:46.320 00:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:46.320 00:16:33 -- accel/accel.sh@21 -- # val= 00:07:46.320 00:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.320 00:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:46.320 00:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:46.320 00:16:33 -- accel/accel.sh@21 -- # val= 00:07:46.320 00:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.320 00:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:46.320 00:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:46.320 00:16:33 -- accel/accel.sh@21 -- # val= 00:07:46.320 00:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.320 00:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:46.320 00:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:46.320 00:16:33 -- accel/accel.sh@21 -- # val= 00:07:46.320 00:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.321 00:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:46.321 00:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:46.321 00:16:33 -- accel/accel.sh@21 -- # val= 00:07:46.321 00:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.321 00:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:46.321 00:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:46.321 00:16:33 -- accel/accel.sh@21 -- # val= 00:07:46.321 00:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.321 00:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:46.321 00:16:33 -- 
accel/accel.sh@20 -- # read -r var val 00:07:46.321 00:16:33 -- accel/accel.sh@21 -- # val= 00:07:46.321 00:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.321 00:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:46.321 00:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:46.321 00:16:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:46.321 00:16:33 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:46.321 00:16:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.321 00:07:46.321 real 0m2.958s 00:07:46.321 user 0m9.304s 00:07:46.321 sys 0m0.253s 00:07:46.321 00:16:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.321 00:16:33 -- common/autotest_common.sh@10 -- # set +x 00:07:46.321 ************************************ 00:07:46.321 END TEST accel_decomp_mcore 00:07:46.321 ************************************ 00:07:46.321 00:16:33 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.321 00:16:33 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:46.321 00:16:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.321 00:16:33 -- common/autotest_common.sh@10 -- # set +x 00:07:46.321 ************************************ 00:07:46.321 START TEST accel_decomp_full_mcore 00:07:46.321 ************************************ 00:07:46.321 00:16:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.321 00:16:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:46.321 00:16:33 -- accel/accel.sh@17 -- # local accel_module 00:07:46.321 00:16:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.321 00:16:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.321 00:16:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.321 00:16:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.321 00:16:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.321 00:16:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.321 00:16:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.321 00:16:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.321 00:16:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.321 00:16:33 -- accel/accel.sh@42 -- # jq -r . 00:07:46.321 [2024-07-13 00:16:33.310537] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:46.321 [2024-07-13 00:16:33.311190] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71232 ] 00:07:46.321 [2024-07-13 00:16:33.445967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.321 [2024-07-13 00:16:33.542708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.321 [2024-07-13 00:16:33.542827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.321 [2024-07-13 00:16:33.542971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.321 [2024-07-13 00:16:33.542974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.694 00:16:34 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:47.694 00:07:47.694 SPDK Configuration: 00:07:47.694 Core mask: 0xf 00:07:47.694 00:07:47.694 Accel Perf Configuration: 00:07:47.694 Workload Type: decompress 00:07:47.694 Transfer size: 111250 bytes 00:07:47.694 Vector count 1 00:07:47.694 Module: software 00:07:47.694 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.694 Queue depth: 32 00:07:47.694 Allocate depth: 32 00:07:47.694 # threads/core: 1 00:07:47.694 Run time: 1 seconds 00:07:47.694 Verify: Yes 00:07:47.694 00:07:47.694 Running for 1 seconds... 00:07:47.694 00:07:47.694 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:47.694 ------------------------------------------------------------------------------------ 00:07:47.694 0,0 4512/s 186 MiB/s 0 0 00:07:47.694 3,0 4544/s 187 MiB/s 0 0 00:07:47.694 2,0 4544/s 187 MiB/s 0 0 00:07:47.694 1,0 4512/s 186 MiB/s 0 0 00:07:47.694 ==================================================================================== 00:07:47.694 Total 18112/s 1921 MiB/s 0 0' 00:07:47.694 00:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:47.694 00:16:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.694 00:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:47.694 00:16:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.694 00:16:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.694 00:16:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.694 00:16:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.694 00:16:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.694 00:16:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.694 00:16:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.694 00:16:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.694 00:16:34 -- accel/accel.sh@42 -- # jq -r . 00:07:47.694 [2024-07-13 00:16:34.795820] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:47.694 [2024-07-13 00:16:34.795946] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71249 ] 00:07:47.952 [2024-07-13 00:16:34.928378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.952 [2024-07-13 00:16:35.018747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.952 [2024-07-13 00:16:35.018841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.952 [2024-07-13 00:16:35.018980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.952 [2024-07-13 00:16:35.018983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val= 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val= 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val= 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val=0xf 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val= 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val= 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val=decompress 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val= 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val=software 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@23 -- # accel_module=software 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 
00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val=32 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val=32 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val=1 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val=Yes 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val= 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:47.952 00:16:35 -- accel/accel.sh@21 -- # val= 00:07:47.952 00:16:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # IFS=: 00:07:47.952 00:16:35 -- accel/accel.sh@20 -- # read -r var val 00:07:49.353 00:16:36 -- accel/accel.sh@21 -- # val= 00:07:49.353 00:16:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.353 00:16:36 -- accel/accel.sh@20 -- # IFS=: 00:07:49.353 00:16:36 -- accel/accel.sh@20 -- # read -r var val 00:07:49.353 00:16:36 -- accel/accel.sh@21 -- # val= 00:07:49.353 00:16:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.353 00:16:36 -- accel/accel.sh@20 -- # IFS=: 00:07:49.353 00:16:36 -- accel/accel.sh@20 -- # read -r var val 00:07:49.353 00:16:36 -- accel/accel.sh@21 -- # val= 00:07:49.353 00:16:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.353 00:16:36 -- accel/accel.sh@20 -- # IFS=: 00:07:49.353 00:16:36 -- accel/accel.sh@20 -- # read -r var val 00:07:49.353 00:16:36 -- accel/accel.sh@21 -- # val= 00:07:49.353 00:16:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.353 00:16:36 -- accel/accel.sh@20 -- # IFS=: 00:07:49.353 00:16:36 -- accel/accel.sh@20 -- # read -r var val 00:07:49.353 00:16:36 -- accel/accel.sh@21 -- # val= 00:07:49.353 00:16:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.353 00:16:36 -- accel/accel.sh@20 -- # IFS=: 00:07:49.353 00:16:36 -- accel/accel.sh@20 -- # read -r var val 00:07:49.353 00:16:36 -- accel/accel.sh@21 -- # val= 00:07:49.353 00:16:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.353 00:16:36 -- accel/accel.sh@20 -- # IFS=: 00:07:49.353 00:16:36 -- accel/accel.sh@20 -- # read -r var val 00:07:49.353 00:16:36 -- accel/accel.sh@21 -- # val= 00:07:49.353 00:16:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.353 00:16:36 -- accel/accel.sh@20 -- # IFS=: 00:07:49.353 00:16:36 -- accel/accel.sh@20 -- # read -r var val 00:07:49.353 00:16:36 -- accel/accel.sh@21 -- # val= 00:07:49.353 00:16:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.353 00:16:36 -- accel/accel.sh@20 -- # IFS=: 00:07:49.353 00:16:36 -- 
accel/accel.sh@20 -- # read -r var val 00:07:49.353 00:16:36 -- accel/accel.sh@21 -- # val= 00:07:49.353 00:16:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.353 00:16:36 -- accel/accel.sh@20 -- # IFS=: 00:07:49.353 00:16:36 -- accel/accel.sh@20 -- # read -r var val 00:07:49.353 00:16:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:49.353 00:16:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:49.353 00:16:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.353 00:07:49.353 real 0m2.993s 00:07:49.353 user 0m9.427s 00:07:49.353 sys 0m0.237s 00:07:49.353 00:16:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.353 00:16:36 -- common/autotest_common.sh@10 -- # set +x 00:07:49.353 ************************************ 00:07:49.353 END TEST accel_decomp_full_mcore 00:07:49.353 ************************************ 00:07:49.353 00:16:36 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:49.353 00:16:36 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:49.353 00:16:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.353 00:16:36 -- common/autotest_common.sh@10 -- # set +x 00:07:49.353 ************************************ 00:07:49.353 START TEST accel_decomp_mthread 00:07:49.353 ************************************ 00:07:49.353 00:16:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:49.353 00:16:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.353 00:16:36 -- accel/accel.sh@17 -- # local accel_module 00:07:49.353 00:16:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:49.353 00:16:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:49.353 00:16:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.353 00:16:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.353 00:16:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.353 00:16:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.353 00:16:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.353 00:16:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.353 00:16:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.353 00:16:36 -- accel/accel.sh@42 -- # jq -r . 00:07:49.353 [2024-07-13 00:16:36.351205] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:49.353 [2024-07-13 00:16:36.351757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71292 ] 00:07:49.353 [2024-07-13 00:16:36.484776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.354 [2024-07-13 00:16:36.580342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.729 00:16:37 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:50.729 00:07:50.729 SPDK Configuration: 00:07:50.729 Core mask: 0x1 00:07:50.729 00:07:50.729 Accel Perf Configuration: 00:07:50.729 Workload Type: decompress 00:07:50.729 Transfer size: 4096 bytes 00:07:50.729 Vector count 1 00:07:50.729 Module: software 00:07:50.729 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.729 Queue depth: 32 00:07:50.729 Allocate depth: 32 00:07:50.729 # threads/core: 2 00:07:50.729 Run time: 1 seconds 00:07:50.729 Verify: Yes 00:07:50.729 00:07:50.729 Running for 1 seconds... 00:07:50.729 00:07:50.729 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:50.729 ------------------------------------------------------------------------------------ 00:07:50.729 0,1 34304/s 63 MiB/s 0 0 00:07:50.729 0,0 34176/s 62 MiB/s 0 0 00:07:50.729 ==================================================================================== 00:07:50.729 Total 68480/s 267 MiB/s 0 0' 00:07:50.729 00:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:50.729 00:16:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:50.729 00:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:50.729 00:16:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:50.729 00:16:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.729 00:16:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.729 00:16:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.729 00:16:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.729 00:16:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.729 00:16:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.729 00:16:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.729 00:16:37 -- accel/accel.sh@42 -- # jq -r . 00:07:50.729 [2024-07-13 00:16:37.841588] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:50.729 [2024-07-13 00:16:37.841759] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71306 ] 00:07:50.988 [2024-07-13 00:16:37.985500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.988 [2024-07-13 00:16:38.084714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.988 00:16:38 -- accel/accel.sh@21 -- # val= 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:50.988 00:16:38 -- accel/accel.sh@21 -- # val= 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:50.988 00:16:38 -- accel/accel.sh@21 -- # val= 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:50.988 00:16:38 -- accel/accel.sh@21 -- # val=0x1 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:50.988 00:16:38 -- accel/accel.sh@21 -- # val= 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:50.988 00:16:38 -- accel/accel.sh@21 -- # val= 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:50.988 00:16:38 -- accel/accel.sh@21 -- # val=decompress 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:50.988 00:16:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:50.988 00:16:38 -- accel/accel.sh@21 -- # val= 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:50.988 00:16:38 -- accel/accel.sh@21 -- # val=software 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:50.988 00:16:38 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:50.988 00:16:38 -- accel/accel.sh@21 -- # val=32 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:50.988 00:16:38 -- 
accel/accel.sh@21 -- # val=32 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:50.988 00:16:38 -- accel/accel.sh@21 -- # val=2 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:50.988 00:16:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:50.988 00:16:38 -- accel/accel.sh@21 -- # val=Yes 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:50.988 00:16:38 -- accel/accel.sh@21 -- # val= 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:50.988 00:16:38 -- accel/accel.sh@21 -- # val= 00:07:50.988 00:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:50.988 00:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:52.363 00:16:39 -- accel/accel.sh@21 -- # val= 00:07:52.363 00:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.363 00:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:52.363 00:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:52.363 00:16:39 -- accel/accel.sh@21 -- # val= 00:07:52.363 00:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.363 00:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:52.363 00:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:52.363 00:16:39 -- accel/accel.sh@21 -- # val= 00:07:52.363 00:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.363 00:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:52.363 00:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:52.363 00:16:39 -- accel/accel.sh@21 -- # val= 00:07:52.363 00:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.363 00:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:52.363 00:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:52.363 00:16:39 -- accel/accel.sh@21 -- # val= 00:07:52.363 00:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.363 00:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:52.363 00:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:52.363 00:16:39 -- accel/accel.sh@21 -- # val= 00:07:52.363 00:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.363 00:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:52.363 00:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:52.363 00:16:39 -- accel/accel.sh@21 -- # val= 00:07:52.363 00:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.363 00:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:52.363 00:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:52.363 00:16:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:52.363 00:16:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:52.363 00:16:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.363 00:07:52.363 real 0m2.977s 00:07:52.363 user 0m2.544s 00:07:52.363 sys 0m0.230s 00:07:52.363 00:16:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.363 00:16:39 -- common/autotest_common.sh@10 -- # set +x 00:07:52.363 ************************************ 00:07:52.363 END 
TEST accel_decomp_mthread 00:07:52.363 ************************************ 00:07:52.363 00:16:39 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.363 00:16:39 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:52.363 00:16:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.363 00:16:39 -- common/autotest_common.sh@10 -- # set +x 00:07:52.363 ************************************ 00:07:52.363 START TEST accel_deomp_full_mthread 00:07:52.363 ************************************ 00:07:52.363 00:16:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.363 00:16:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.363 00:16:39 -- accel/accel.sh@17 -- # local accel_module 00:07:52.363 00:16:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.363 00:16:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.363 00:16:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.363 00:16:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.363 00:16:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.363 00:16:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.363 00:16:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.363 00:16:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.363 00:16:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.363 00:16:39 -- accel/accel.sh@42 -- # jq -r . 00:07:52.363 [2024-07-13 00:16:39.379833] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:52.363 [2024-07-13 00:16:39.379993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71346 ] 00:07:52.363 [2024-07-13 00:16:39.517636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.621 [2024-07-13 00:16:39.619481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.995 00:16:40 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:53.995 00:07:53.995 SPDK Configuration: 00:07:53.995 Core mask: 0x1 00:07:53.995 00:07:53.995 Accel Perf Configuration: 00:07:53.995 Workload Type: decompress 00:07:53.995 Transfer size: 111250 bytes 00:07:53.995 Vector count 1 00:07:53.995 Module: software 00:07:53.995 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:53.995 Queue depth: 32 00:07:53.995 Allocate depth: 32 00:07:53.995 # threads/core: 2 00:07:53.995 Run time: 1 seconds 00:07:53.995 Verify: Yes 00:07:53.995 00:07:53.995 Running for 1 seconds... 
00:07:53.995 00:07:53.995 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:53.995 ------------------------------------------------------------------------------------ 00:07:53.995 0,1 2400/s 99 MiB/s 0 0 00:07:53.995 0,0 2368/s 97 MiB/s 0 0 00:07:53.995 ==================================================================================== 00:07:53.995 Total 4768/s 505 MiB/s 0 0' 00:07:53.995 00:16:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:53.995 00:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:53.995 00:16:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.995 00:16:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.995 00:16:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.995 00:16:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.995 00:16:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.995 00:16:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.995 00:16:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.995 00:16:40 -- accel/accel.sh@42 -- # jq -r . 00:07:53.995 [2024-07-13 00:16:40.873568] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:53.995 [2024-07-13 00:16:40.873672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71360 ] 00:07:53.995 [2024-07-13 00:16:41.005062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.995 [2024-07-13 00:16:41.104730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val= 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val= 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val= 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val=0x1 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val= 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val= 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val=decompress 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val= 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val=software 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val=32 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val=32 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val=2 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val=Yes 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val= 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:53.995 00:16:41 -- accel/accel.sh@21 -- # val= 00:07:53.995 00:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:53.995 00:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:55.392 00:16:42 -- accel/accel.sh@21 -- # val= 00:07:55.392 00:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.392 00:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:55.392 00:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:55.392 00:16:42 -- accel/accel.sh@21 -- # val= 00:07:55.392 00:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.392 00:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:55.392 00:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:55.392 00:16:42 -- accel/accel.sh@21 -- # val= 00:07:55.392 00:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.392 00:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:55.392 00:16:42 -- accel/accel.sh@20 -- # 
read -r var val 00:07:55.392 00:16:42 -- accel/accel.sh@21 -- # val= 00:07:55.392 00:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.392 00:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:55.392 00:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:55.392 00:16:42 -- accel/accel.sh@21 -- # val= 00:07:55.392 00:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.392 00:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:55.392 00:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:55.392 00:16:42 -- accel/accel.sh@21 -- # val= 00:07:55.392 00:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.392 00:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:55.392 00:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:55.392 00:16:42 -- accel/accel.sh@21 -- # val= 00:07:55.392 00:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.392 00:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:55.392 00:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:55.392 00:16:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:55.392 00:16:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:55.392 00:16:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.392 00:07:55.392 real 0m2.991s 00:07:55.392 user 0m2.560s 00:07:55.392 sys 0m0.226s 00:07:55.392 00:16:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.392 00:16:42 -- common/autotest_common.sh@10 -- # set +x 00:07:55.392 ************************************ 00:07:55.392 END TEST accel_deomp_full_mthread 00:07:55.392 ************************************ 00:07:55.392 00:16:42 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:55.392 00:16:42 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:55.392 00:16:42 -- accel/accel.sh@129 -- # build_accel_config 00:07:55.392 00:16:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:55.392 00:16:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.392 00:16:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.392 00:16:42 -- common/autotest_common.sh@10 -- # set +x 00:07:55.392 00:16:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.392 00:16:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.392 00:16:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.392 00:16:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.392 00:16:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.392 00:16:42 -- accel/accel.sh@42 -- # jq -r . 00:07:55.392 ************************************ 00:07:55.392 START TEST accel_dif_functional_tests 00:07:55.392 ************************************ 00:07:55.392 00:16:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:55.392 [2024-07-13 00:16:42.465379] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:55.392 [2024-07-13 00:16:42.465547] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71396 ] 00:07:55.392 [2024-07-13 00:16:42.609245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:55.650 [2024-07-13 00:16:42.711037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.650 [2024-07-13 00:16:42.711174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.650 [2024-07-13 00:16:42.711179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.650 00:07:55.650 00:07:55.650 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.650 http://cunit.sourceforge.net/ 00:07:55.650 00:07:55.650 00:07:55.650 Suite: accel_dif 00:07:55.650 Test: verify: DIF generated, GUARD check ...passed 00:07:55.650 Test: verify: DIF generated, APPTAG check ...passed 00:07:55.650 Test: verify: DIF generated, REFTAG check ...passed 00:07:55.650 Test: verify: DIF not generated, GUARD check ...passed 00:07:55.650 Test: verify: DIF not generated, APPTAG check ...passed 00:07:55.650 Test: verify: DIF not generated, REFTAG check ...passed 00:07:55.650 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:55.650 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:55.650 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:55.650 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:55.650 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:55.650 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-13 00:16:42.802419] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:55.650 [2024-07-13 00:16:42.802494] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:55.651 [2024-07-13 00:16:42.802531] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:55.651 [2024-07-13 00:16:42.802561] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:55.651 [2024-07-13 00:16:42.802586] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:55.651 [2024-07-13 00:16:42.802624] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:55.651 [2024-07-13 00:16:42.802685] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:55.651 [2024-07-13 00:16:42.802829] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:55.651 passed 00:07:55.651 Test: generate copy: DIF generated, GUARD check ...passed 00:07:55.651 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:55.651 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:55.651 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:55.651 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:55.651 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:55.651 Test: generate copy: iovecs-len validate ...passed 00:07:55.651 Test: generate copy: buffer alignment validate ...passed 00:07:55.651 00:07:55.651 [2024-07-13 00:16:42.803065] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned 
with block_size. 00:07:55.651 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.651 suites 1 1 n/a 0 0 00:07:55.651 tests 20 20 20 0 0 00:07:55.651 asserts 204 204 204 0 n/a 00:07:55.651 00:07:55.651 Elapsed time = 0.002 seconds 00:07:55.907 00:07:55.907 real 0m0.626s 00:07:55.907 user 0m0.819s 00:07:55.907 sys 0m0.151s 00:07:55.907 00:16:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.907 00:16:43 -- common/autotest_common.sh@10 -- # set +x 00:07:55.907 ************************************ 00:07:55.907 END TEST accel_dif_functional_tests 00:07:55.907 ************************************ 00:07:55.907 00:07:55.907 real 1m3.122s 00:07:55.907 user 1m7.390s 00:07:55.907 sys 0m6.049s 00:07:55.907 00:16:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.907 ************************************ 00:07:55.907 END TEST accel 00:07:55.907 ************************************ 00:07:55.907 00:16:43 -- common/autotest_common.sh@10 -- # set +x 00:07:55.907 00:16:43 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:55.907 00:16:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:55.907 00:16:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.907 00:16:43 -- common/autotest_common.sh@10 -- # set +x 00:07:55.907 ************************************ 00:07:55.907 START TEST accel_rpc 00:07:55.907 ************************************ 00:07:55.907 00:16:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:56.164 * Looking for test storage... 00:07:56.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:56.164 00:16:43 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:56.164 00:16:43 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71459 00:07:56.164 00:16:43 -- accel/accel_rpc.sh@15 -- # waitforlisten 71459 00:07:56.164 00:16:43 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:56.164 00:16:43 -- common/autotest_common.sh@819 -- # '[' -z 71459 ']' 00:07:56.164 00:16:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.164 00:16:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:56.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.164 00:16:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.164 00:16:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:56.164 00:16:43 -- common/autotest_common.sh@10 -- # set +x 00:07:56.164 [2024-07-13 00:16:43.261676] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:56.164 [2024-07-13 00:16:43.261782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71459 ] 00:07:56.421 [2024-07-13 00:16:43.399804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.421 [2024-07-13 00:16:43.499029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:56.421 [2024-07-13 00:16:43.499269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.353 00:16:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:57.353 00:16:44 -- common/autotest_common.sh@852 -- # return 0 00:07:57.353 00:16:44 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:57.353 00:16:44 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:57.353 00:16:44 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:57.353 00:16:44 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:57.354 00:16:44 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:57.354 00:16:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:57.354 00:16:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.354 00:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:57.354 ************************************ 00:07:57.354 START TEST accel_assign_opcode 00:07:57.354 ************************************ 00:07:57.354 00:16:44 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:57.354 00:16:44 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:57.354 00:16:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.354 00:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:57.354 [2024-07-13 00:16:44.283914] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:57.354 00:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.354 00:16:44 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:57.354 00:16:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.354 00:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:57.354 [2024-07-13 00:16:44.291848] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:57.354 00:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.354 00:16:44 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:57.354 00:16:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.354 00:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:57.354 00:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.354 00:16:44 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:57.354 00:16:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.354 00:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:57.354 00:16:44 -- accel/accel_rpc.sh@42 -- # grep software 00:07:57.354 00:16:44 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:57.354 00:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.354 software 00:07:57.354 00:07:57.354 real 0m0.299s 00:07:57.354 user 0m0.055s 00:07:57.354 sys 0m0.009s 00:07:57.354 ************************************ 00:07:57.354 END TEST accel_assign_opcode 00:07:57.354 ************************************ 00:07:57.354 00:16:44 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.354 00:16:44 -- common/autotest_common.sh@10 -- # set +x 00:07:57.612 00:16:44 -- accel/accel_rpc.sh@55 -- # killprocess 71459 00:07:57.612 00:16:44 -- common/autotest_common.sh@926 -- # '[' -z 71459 ']' 00:07:57.612 00:16:44 -- common/autotest_common.sh@930 -- # kill -0 71459 00:07:57.612 00:16:44 -- common/autotest_common.sh@931 -- # uname 00:07:57.612 00:16:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:57.612 00:16:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71459 00:07:57.612 00:16:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:57.612 killing process with pid 71459 00:07:57.612 00:16:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:57.613 00:16:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71459' 00:07:57.613 00:16:44 -- common/autotest_common.sh@945 -- # kill 71459 00:07:57.613 00:16:44 -- common/autotest_common.sh@950 -- # wait 71459 00:07:57.871 00:07:57.871 real 0m1.890s 00:07:57.871 user 0m2.021s 00:07:57.871 sys 0m0.447s 00:07:57.871 00:16:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.871 00:16:45 -- common/autotest_common.sh@10 -- # set +x 00:07:57.871 ************************************ 00:07:57.871 END TEST accel_rpc 00:07:57.871 ************************************ 00:07:57.871 00:16:45 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:57.871 00:16:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:57.871 00:16:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.871 00:16:45 -- common/autotest_common.sh@10 -- # set +x 00:07:57.871 ************************************ 00:07:57.871 START TEST app_cmdline 00:07:57.871 ************************************ 00:07:57.871 00:16:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:58.129 * Looking for test storage... 00:07:58.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:58.129 00:16:45 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:58.129 00:16:45 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71569 00:07:58.129 00:16:45 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:58.129 00:16:45 -- app/cmdline.sh@18 -- # waitforlisten 71569 00:07:58.129 00:16:45 -- common/autotest_common.sh@819 -- # '[' -z 71569 ']' 00:07:58.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.129 00:16:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.129 00:16:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:58.129 00:16:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.129 00:16:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:58.129 00:16:45 -- common/autotest_common.sh@10 -- # set +x 00:07:58.129 [2024-07-13 00:16:45.220915] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
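The accel_assign_opcode steps above boil down to four RPCs against that paused target. A rough re-statement with scripts/rpc.py (the same calls rpc_cmd issues over /var/tmp/spdk.sock; the "software" result is what this run printed):

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc accel_assign_opc -o copy -m incorrect      # accepted with a NOTICE, as in the log above
$rpc accel_assign_opc -o copy -m software       # the later assignment is the one that sticks
$rpc framework_start_init                       # leave --wait-for-rpc mode and finish startup
$rpc accel_get_opc_assignments | jq -r .copy    # prints "software" in this run
```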
00:07:58.129 [2024-07-13 00:16:45.221077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71569 ] 00:07:58.387 [2024-07-13 00:16:45.365045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.387 [2024-07-13 00:16:45.451925] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:58.387 [2024-07-13 00:16:45.452089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.953 00:16:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:58.953 00:16:46 -- common/autotest_common.sh@852 -- # return 0 00:07:58.953 00:16:46 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:59.519 { 00:07:59.519 "fields": { 00:07:59.519 "commit": "4b94202c6", 00:07:59.519 "major": 24, 00:07:59.519 "minor": 1, 00:07:59.519 "patch": 1, 00:07:59.519 "suffix": "-pre" 00:07:59.519 }, 00:07:59.519 "version": "SPDK v24.01.1-pre git sha1 4b94202c6" 00:07:59.519 } 00:07:59.519 00:16:46 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:59.519 00:16:46 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:59.519 00:16:46 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:59.519 00:16:46 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:59.519 00:16:46 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:59.519 00:16:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.519 00:16:46 -- common/autotest_common.sh@10 -- # set +x 00:07:59.519 00:16:46 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:59.519 00:16:46 -- app/cmdline.sh@26 -- # sort 00:07:59.519 00:16:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.519 00:16:46 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:59.519 00:16:46 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:59.519 00:16:46 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.519 00:16:46 -- common/autotest_common.sh@640 -- # local es=0 00:07:59.519 00:16:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.519 00:16:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.519 00:16:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.519 00:16:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.519 00:16:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.519 00:16:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.519 00:16:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.519 00:16:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.519 00:16:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:59.519 00:16:46 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.519 2024/07/13 00:16:46 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:59.519 request: 00:07:59.519 { 00:07:59.519 "method": "env_dpdk_get_mem_stats", 00:07:59.519 "params": {} 00:07:59.519 } 00:07:59.519 Got JSON-RPC error response 00:07:59.519 GoRPCClient: error on JSON-RPC call 00:07:59.519 00:16:46 -- common/autotest_common.sh@643 -- # es=1 00:07:59.519 00:16:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:59.519 00:16:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:59.519 00:16:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:59.519 00:16:46 -- app/cmdline.sh@1 -- # killprocess 71569 00:07:59.519 00:16:46 -- common/autotest_common.sh@926 -- # '[' -z 71569 ']' 00:07:59.519 00:16:46 -- common/autotest_common.sh@930 -- # kill -0 71569 00:07:59.519 00:16:46 -- common/autotest_common.sh@931 -- # uname 00:07:59.777 00:16:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:59.777 00:16:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71569 00:07:59.777 00:16:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:59.777 00:16:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:59.777 killing process with pid 71569 00:07:59.777 00:16:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71569' 00:07:59.777 00:16:46 -- common/autotest_common.sh@945 -- # kill 71569 00:07:59.777 00:16:46 -- common/autotest_common.sh@950 -- # wait 71569 00:08:00.035 00:08:00.035 real 0m2.063s 00:08:00.035 user 0m2.557s 00:08:00.035 sys 0m0.502s 00:08:00.035 00:16:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.035 00:16:47 -- common/autotest_common.sh@10 -- # set +x 00:08:00.035 ************************************ 00:08:00.035 END TEST app_cmdline 00:08:00.035 ************************************ 00:08:00.035 00:16:47 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:00.035 00:16:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:00.035 00:16:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.035 00:16:47 -- common/autotest_common.sh@10 -- # set +x 00:08:00.035 ************************************ 00:08:00.035 START TEST version 00:08:00.035 ************************************ 00:08:00.035 00:16:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:00.035 * Looking for test storage... 
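The env_dpdk_get_mem_stats failure above is the point of the app_cmdline test: the target was started with an RPC allowlist, so only the two listed methods are callable. A short sketch of that behaviour (same binary and allowlist as in this run; waiting for the socket is elided):

```bash
SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
# allowed methods respond normally
$SPDK/scripts/rpc.py spdk_get_version
$SPDK/scripts/rpc.py rpc_get_methods
# any other method is rejected with JSON-RPC error -32601 "Method not found",
# which is exactly the env_dpdk_get_mem_stats error printed above
$SPDK/scripts/rpc.py env_dpdk_get_mem_stats || echo "rejected, as expected"
```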
00:08:00.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:00.035 00:16:47 -- app/version.sh@17 -- # get_header_version major 00:08:00.294 00:16:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:00.294 00:16:47 -- app/version.sh@14 -- # tr -d '"' 00:08:00.294 00:16:47 -- app/version.sh@14 -- # cut -f2 00:08:00.294 00:16:47 -- app/version.sh@17 -- # major=24 00:08:00.294 00:16:47 -- app/version.sh@18 -- # get_header_version minor 00:08:00.294 00:16:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:00.294 00:16:47 -- app/version.sh@14 -- # cut -f2 00:08:00.294 00:16:47 -- app/version.sh@14 -- # tr -d '"' 00:08:00.294 00:16:47 -- app/version.sh@18 -- # minor=1 00:08:00.294 00:16:47 -- app/version.sh@19 -- # get_header_version patch 00:08:00.294 00:16:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:00.294 00:16:47 -- app/version.sh@14 -- # cut -f2 00:08:00.294 00:16:47 -- app/version.sh@14 -- # tr -d '"' 00:08:00.294 00:16:47 -- app/version.sh@19 -- # patch=1 00:08:00.294 00:16:47 -- app/version.sh@20 -- # get_header_version suffix 00:08:00.294 00:16:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:00.294 00:16:47 -- app/version.sh@14 -- # cut -f2 00:08:00.294 00:16:47 -- app/version.sh@14 -- # tr -d '"' 00:08:00.294 00:16:47 -- app/version.sh@20 -- # suffix=-pre 00:08:00.294 00:16:47 -- app/version.sh@22 -- # version=24.1 00:08:00.294 00:16:47 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:00.294 00:16:47 -- app/version.sh@25 -- # version=24.1.1 00:08:00.294 00:16:47 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:00.294 00:16:47 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:00.294 00:16:47 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:00.294 00:16:47 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:00.294 00:16:47 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:00.294 00:08:00.294 real 0m0.148s 00:08:00.294 user 0m0.083s 00:08:00.294 sys 0m0.096s 00:08:00.294 00:16:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.294 00:16:47 -- common/autotest_common.sh@10 -- # set +x 00:08:00.294 ************************************ 00:08:00.294 END TEST version 00:08:00.294 ************************************ 00:08:00.294 00:16:47 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:08:00.294 00:16:47 -- spdk/autotest.sh@204 -- # uname -s 00:08:00.294 00:16:47 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:08:00.294 00:16:47 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:00.294 00:16:47 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:00.294 00:16:47 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:08:00.294 00:16:47 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:08:00.294 00:16:47 -- spdk/autotest.sh@268 -- # timing_exit lib 00:08:00.294 00:16:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:00.294 00:16:47 -- common/autotest_common.sh@10 -- # set +x 00:08:00.294 00:16:47 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:00.294 00:16:47 -- 
spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:08:00.294 00:16:47 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:08:00.294 00:16:47 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:08:00.294 00:16:47 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:08:00.294 00:16:47 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:08:00.294 00:16:47 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:00.294 00:16:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:00.294 00:16:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.294 00:16:47 -- common/autotest_common.sh@10 -- # set +x 00:08:00.294 ************************************ 00:08:00.294 START TEST nvmf_tcp 00:08:00.294 ************************************ 00:08:00.294 00:16:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:00.294 * Looking for test storage... 00:08:00.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:00.294 00:16:47 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:00.294 00:16:47 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:00.295 00:16:47 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:00.295 00:16:47 -- nvmf/common.sh@7 -- # uname -s 00:08:00.295 00:16:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.295 00:16:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.295 00:16:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.553 00:16:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.553 00:16:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.553 00:16:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.553 00:16:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.553 00:16:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.553 00:16:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.553 00:16:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.553 00:16:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:08:00.553 00:16:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:08:00.553 00:16:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.553 00:16:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.553 00:16:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:00.553 00:16:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:00.553 00:16:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.553 00:16:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.553 00:16:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.554 00:16:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.554 00:16:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.554 00:16:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.554 00:16:47 -- paths/export.sh@5 -- # export PATH 00:08:00.554 00:16:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.554 00:16:47 -- nvmf/common.sh@46 -- # : 0 00:08:00.554 00:16:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:00.554 00:16:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:00.554 00:16:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:00.554 00:16:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.554 00:16:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.554 00:16:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:00.554 00:16:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:00.554 00:16:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:00.554 00:16:47 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:00.554 00:16:47 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:00.554 00:16:47 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:00.554 00:16:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:00.554 00:16:47 -- common/autotest_common.sh@10 -- # set +x 00:08:00.554 00:16:47 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:00.554 00:16:47 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:00.554 00:16:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:00.554 00:16:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.554 00:16:47 -- common/autotest_common.sh@10 -- # set +x 00:08:00.554 ************************************ 00:08:00.554 START TEST nvmf_example 00:08:00.554 ************************************ 00:08:00.554 00:16:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:00.554 * Looking for test storage... 
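Stepping back briefly to the version test that finished above: the version string is read straight out of include/spdk/version.h with the grep/cut/tr pipeline visible in the log. A condensed sketch of that extraction (paths as in this run; the suffix handling is simplified):

```bash
hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}
major=$(get_header_version MAJOR)   # 24 in this run
minor=$(get_header_version MINOR)   # 1
patch=$(get_header_version PATCH)   # 1
version=$major.$minor
(( patch != 0 )) && version=$version.$patch
echo "$version"                     # 24.1.1; the test then compares the assembled
                                    # 24.1.1rc0 against python3's spdk.__version__
```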
00:08:00.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:00.554 00:16:47 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:00.554 00:16:47 -- nvmf/common.sh@7 -- # uname -s 00:08:00.554 00:16:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.554 00:16:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.554 00:16:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.554 00:16:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.554 00:16:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.554 00:16:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.554 00:16:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.554 00:16:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.554 00:16:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.554 00:16:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.554 00:16:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:08:00.554 00:16:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:08:00.554 00:16:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.554 00:16:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.554 00:16:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:00.554 00:16:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:00.554 00:16:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.554 00:16:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.554 00:16:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.554 00:16:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.554 00:16:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.554 00:16:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.554 00:16:47 -- 
paths/export.sh@5 -- # export PATH 00:08:00.554 00:16:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.554 00:16:47 -- nvmf/common.sh@46 -- # : 0 00:08:00.554 00:16:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:00.554 00:16:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:00.554 00:16:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:00.554 00:16:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.554 00:16:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.554 00:16:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:00.554 00:16:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:00.554 00:16:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:00.554 00:16:47 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:00.554 00:16:47 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:00.554 00:16:47 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:00.554 00:16:47 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:00.554 00:16:47 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:00.554 00:16:47 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:00.554 00:16:47 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:00.554 00:16:47 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:00.554 00:16:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:00.554 00:16:47 -- common/autotest_common.sh@10 -- # set +x 00:08:00.554 00:16:47 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:00.554 00:16:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:00.554 00:16:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.554 00:16:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:00.554 00:16:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:00.554 00:16:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:00.554 00:16:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.554 00:16:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.554 00:16:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.554 00:16:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:00.554 00:16:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:00.554 00:16:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:00.554 00:16:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:00.554 00:16:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:00.554 00:16:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:00.554 00:16:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.554 00:16:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.554 00:16:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:00.554 00:16:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:00.554 00:16:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:00.554 00:16:47 
-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:00.554 00:16:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:00.554 00:16:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.554 00:16:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:00.554 00:16:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:00.554 00:16:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:00.554 00:16:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:00.554 00:16:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:00.554 Cannot find device "nvmf_init_br" 00:08:00.554 00:16:47 -- nvmf/common.sh@153 -- # true 00:08:00.554 00:16:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:00.554 Cannot find device "nvmf_tgt_br" 00:08:00.554 00:16:47 -- nvmf/common.sh@154 -- # true 00:08:00.554 00:16:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:00.554 Cannot find device "nvmf_tgt_br2" 00:08:00.554 00:16:47 -- nvmf/common.sh@155 -- # true 00:08:00.554 00:16:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:00.554 Cannot find device "nvmf_init_br" 00:08:00.554 00:16:47 -- nvmf/common.sh@156 -- # true 00:08:00.554 00:16:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:00.554 Cannot find device "nvmf_tgt_br" 00:08:00.554 00:16:47 -- nvmf/common.sh@157 -- # true 00:08:00.554 00:16:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:00.554 Cannot find device "nvmf_tgt_br2" 00:08:00.554 00:16:47 -- nvmf/common.sh@158 -- # true 00:08:00.554 00:16:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:00.554 Cannot find device "nvmf_br" 00:08:00.554 00:16:47 -- nvmf/common.sh@159 -- # true 00:08:00.554 00:16:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:00.554 Cannot find device "nvmf_init_if" 00:08:00.554 00:16:47 -- nvmf/common.sh@160 -- # true 00:08:00.554 00:16:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:00.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:00.554 00:16:47 -- nvmf/common.sh@161 -- # true 00:08:00.554 00:16:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:00.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:00.554 00:16:47 -- nvmf/common.sh@162 -- # true 00:08:00.554 00:16:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:00.554 00:16:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:00.813 00:16:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:00.813 00:16:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:00.813 00:16:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:00.813 00:16:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:00.813 00:16:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:00.813 00:16:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:00.813 00:16:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:00.813 00:16:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:00.813 
00:16:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:00.813 00:16:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:00.813 00:16:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:00.813 00:16:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:00.813 00:16:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:00.813 00:16:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:00.813 00:16:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:00.813 00:16:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:00.813 00:16:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:00.813 00:16:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:00.813 00:16:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:00.813 00:16:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:00.813 00:16:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:00.813 00:16:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:00.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:08:00.813 00:08:00.813 --- 10.0.0.2 ping statistics --- 00:08:00.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.813 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:01.071 00:16:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:01.071 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:01.071 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:08:01.071 00:08:01.071 --- 10.0.0.3 ping statistics --- 00:08:01.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.071 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:08:01.071 00:16:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:01.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:01.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:08:01.071 00:08:01.071 --- 10.0.0.1 ping statistics --- 00:08:01.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.072 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:01.072 00:16:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.072 00:16:48 -- nvmf/common.sh@421 -- # return 0 00:08:01.072 00:16:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:01.072 00:16:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.072 00:16:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:01.072 00:16:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:01.072 00:16:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.072 00:16:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:01.072 00:16:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:01.072 00:16:48 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:01.072 00:16:48 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:01.072 00:16:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:01.072 00:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:01.072 00:16:48 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:01.072 00:16:48 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:01.072 00:16:48 -- target/nvmf_example.sh@34 -- # nvmfpid=71925 00:08:01.072 00:16:48 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:01.072 00:16:48 -- target/nvmf_example.sh@36 -- # waitforlisten 71925 00:08:01.072 00:16:48 -- common/autotest_common.sh@819 -- # '[' -z 71925 ']' 00:08:01.072 00:16:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.072 00:16:48 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:01.072 00:16:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:01.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.072 00:16:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
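Before the example target was launched, nvmf_veth_init wired up the namespace/bridge topology whose commands and ping checks appear above. A condensed re-statement of those steps (interface names and addresses as in this run; the second target interface on 10.0.0.3 and all cleanup are omitted):

```bash
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # host -> target namespace, as verified in the log
```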
00:08:01.072 00:16:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:01.072 00:16:48 -- common/autotest_common.sh@10 -- # set +x 00:08:02.007 00:16:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:02.007 00:16:49 -- common/autotest_common.sh@852 -- # return 0 00:08:02.007 00:16:49 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:02.007 00:16:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:02.007 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:08:02.007 00:16:49 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:02.007 00:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:02.007 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:08:02.007 00:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:02.007 00:16:49 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:02.007 00:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:02.007 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:08:02.007 00:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:02.007 00:16:49 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:02.007 00:16:49 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:02.007 00:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:02.007 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:08:02.007 00:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:02.007 00:16:49 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:02.007 00:16:49 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:02.007 00:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:02.007 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:08:02.007 00:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:02.007 00:16:49 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.007 00:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:02.007 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:08:02.007 00:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:02.007 00:16:49 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:08:02.007 00:16:49 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:14.230 Initializing NVMe Controllers 00:08:14.230 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:14.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:14.230 Initialization complete. Launching workers. 
00:08:14.230 ======================================================== 00:08:14.230 Latency(us) 00:08:14.230 Device Information : IOPS MiB/s Average min max 00:08:14.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14476.60 56.55 4423.01 767.56 24885.18 00:08:14.230 ======================================================== 00:08:14.230 Total : 14476.60 56.55 4423.01 767.56 24885.18 00:08:14.230 00:08:14.230 00:16:59 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:14.230 00:16:59 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:14.230 00:16:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:14.230 00:16:59 -- nvmf/common.sh@116 -- # sync 00:08:14.230 00:16:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:14.230 00:16:59 -- nvmf/common.sh@119 -- # set +e 00:08:14.230 00:16:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:14.230 00:16:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:14.230 rmmod nvme_tcp 00:08:14.230 rmmod nvme_fabrics 00:08:14.230 rmmod nvme_keyring 00:08:14.230 00:16:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:14.230 00:16:59 -- nvmf/common.sh@123 -- # set -e 00:08:14.230 00:16:59 -- nvmf/common.sh@124 -- # return 0 00:08:14.230 00:16:59 -- nvmf/common.sh@477 -- # '[' -n 71925 ']' 00:08:14.230 00:16:59 -- nvmf/common.sh@478 -- # killprocess 71925 00:08:14.230 00:16:59 -- common/autotest_common.sh@926 -- # '[' -z 71925 ']' 00:08:14.230 00:16:59 -- common/autotest_common.sh@930 -- # kill -0 71925 00:08:14.230 00:16:59 -- common/autotest_common.sh@931 -- # uname 00:08:14.230 00:16:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:14.230 00:16:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71925 00:08:14.230 00:16:59 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:08:14.231 00:16:59 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:08:14.231 killing process with pid 71925 00:08:14.231 00:16:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71925' 00:08:14.231 00:16:59 -- common/autotest_common.sh@945 -- # kill 71925 00:08:14.231 00:16:59 -- common/autotest_common.sh@950 -- # wait 71925 00:08:14.231 nvmf threads initialize successfully 00:08:14.231 bdev subsystem init successfully 00:08:14.231 created a nvmf target service 00:08:14.231 create targets's poll groups done 00:08:14.231 all subsystems of target started 00:08:14.231 nvmf target is running 00:08:14.231 all subsystems of target stopped 00:08:14.231 destroy targets's poll groups done 00:08:14.231 destroyed the nvmf target service 00:08:14.231 bdev subsystem finish successfully 00:08:14.231 nvmf threads destroy successfully 00:08:14.231 00:16:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:14.231 00:16:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:14.231 00:16:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:14.231 00:16:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.231 00:16:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:14.231 00:16:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.231 00:16:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.231 00:16:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.231 00:16:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:14.231 00:16:59 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:14.231 00:16:59 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:08:14.231 00:16:59 -- common/autotest_common.sh@10 -- # set +x 00:08:14.231 00:08:14.231 real 0m12.474s 00:08:14.231 user 0m44.633s 00:08:14.231 sys 0m2.115s 00:08:14.231 00:17:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.231 ************************************ 00:08:14.231 END TEST nvmf_example 00:08:14.231 ************************************ 00:08:14.231 00:17:00 -- common/autotest_common.sh@10 -- # set +x 00:08:14.231 00:17:00 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:14.231 00:17:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:14.231 00:17:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:14.231 00:17:00 -- common/autotest_common.sh@10 -- # set +x 00:08:14.231 ************************************ 00:08:14.231 START TEST nvmf_filesystem 00:08:14.231 ************************************ 00:08:14.231 00:17:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:14.231 * Looking for test storage... 00:08:14.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.231 00:17:00 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:08:14.231 00:17:00 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:14.231 00:17:00 -- common/autotest_common.sh@34 -- # set -e 00:08:14.231 00:17:00 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:14.231 00:17:00 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:14.231 00:17:00 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:14.231 00:17:00 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:14.231 00:17:00 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:14.231 00:17:00 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:14.231 00:17:00 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:14.231 00:17:00 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:14.231 00:17:00 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:14.231 00:17:00 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:14.231 00:17:00 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:14.231 00:17:00 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:14.231 00:17:00 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:14.231 00:17:00 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:14.231 00:17:00 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:14.231 00:17:00 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:14.231 00:17:00 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:14.231 00:17:00 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:14.231 00:17:00 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:14.231 00:17:00 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:14.231 00:17:00 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:14.231 00:17:00 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:14.231 00:17:00 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:14.231 00:17:00 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:14.231 00:17:00 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:14.231 00:17:00 -- common/build_config.sh@22 -- # 
CONFIG_CET=n 00:08:14.231 00:17:00 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:14.231 00:17:00 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:14.231 00:17:00 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:14.231 00:17:00 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:14.231 00:17:00 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:14.231 00:17:00 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:14.231 00:17:00 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:14.231 00:17:00 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:14.231 00:17:00 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:14.231 00:17:00 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:14.231 00:17:00 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:14.231 00:17:00 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:14.231 00:17:00 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:14.231 00:17:00 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:14.231 00:17:00 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:14.231 00:17:00 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:14.231 00:17:00 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:14.231 00:17:00 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:14.231 00:17:00 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:14.231 00:17:00 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:14.231 00:17:00 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:14.231 00:17:00 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:14.231 00:17:00 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:14.231 00:17:00 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:14.231 00:17:00 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:14.231 00:17:00 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:14.231 00:17:00 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:14.231 00:17:00 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:14.231 00:17:00 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:14.231 00:17:00 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:14.231 00:17:00 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:14.231 00:17:00 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:14.231 00:17:00 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:14.231 00:17:00 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:14.231 00:17:00 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:14.231 00:17:00 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:08:14.231 00:17:00 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:14.231 00:17:00 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:14.231 00:17:00 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:14.231 00:17:00 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:14.231 00:17:00 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:14.231 00:17:00 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:14.231 00:17:00 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:14.231 00:17:00 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:14.231 00:17:00 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:14.231 00:17:00 -- common/build_config.sh@68 -- # 
CONFIG_AVAHI=y 00:08:14.231 00:17:00 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:14.231 00:17:00 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:14.231 00:17:00 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:14.231 00:17:00 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:14.231 00:17:00 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:14.231 00:17:00 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:14.231 00:17:00 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:14.231 00:17:00 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:14.231 00:17:00 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:14.231 00:17:00 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:14.231 00:17:00 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:14.231 00:17:00 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:14.231 00:17:00 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:14.231 00:17:00 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:08:14.231 00:17:00 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:08:14.231 00:17:00 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:08:14.231 00:17:00 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:08:14.231 00:17:00 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:08:14.231 00:17:00 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:08:14.231 00:17:00 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:14.231 00:17:00 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:14.231 00:17:00 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:14.231 00:17:00 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:14.231 00:17:00 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:14.231 00:17:00 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:14.231 00:17:00 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:08:14.231 00:17:00 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:14.231 #define SPDK_CONFIG_H 00:08:14.231 #define SPDK_CONFIG_APPS 1 00:08:14.231 #define SPDK_CONFIG_ARCH native 00:08:14.231 #undef SPDK_CONFIG_ASAN 00:08:14.231 #define SPDK_CONFIG_AVAHI 1 00:08:14.231 #undef SPDK_CONFIG_CET 00:08:14.231 #define SPDK_CONFIG_COVERAGE 1 00:08:14.231 #define SPDK_CONFIG_CROSS_PREFIX 00:08:14.231 #undef SPDK_CONFIG_CRYPTO 00:08:14.231 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:14.231 #undef SPDK_CONFIG_CUSTOMOCF 00:08:14.231 #undef SPDK_CONFIG_DAOS 00:08:14.231 #define SPDK_CONFIG_DAOS_DIR 00:08:14.231 #define SPDK_CONFIG_DEBUG 1 00:08:14.231 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:14.231 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:08:14.231 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:08:14.231 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:08:14.231 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:14.231 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:14.231 #define SPDK_CONFIG_EXAMPLES 1 00:08:14.231 #undef SPDK_CONFIG_FC 00:08:14.231 #define 
SPDK_CONFIG_FC_PATH 00:08:14.231 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:14.231 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:14.231 #undef SPDK_CONFIG_FUSE 00:08:14.231 #undef SPDK_CONFIG_FUZZER 00:08:14.231 #define SPDK_CONFIG_FUZZER_LIB 00:08:14.231 #define SPDK_CONFIG_GOLANG 1 00:08:14.231 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:14.231 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:14.231 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:14.231 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:14.231 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:14.231 #define SPDK_CONFIG_IDXD 1 00:08:14.231 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:14.231 #undef SPDK_CONFIG_IPSEC_MB 00:08:14.231 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:14.231 #define SPDK_CONFIG_ISAL 1 00:08:14.231 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:14.231 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:14.231 #define SPDK_CONFIG_LIBDIR 00:08:14.231 #undef SPDK_CONFIG_LTO 00:08:14.231 #define SPDK_CONFIG_MAX_LCORES 00:08:14.231 #define SPDK_CONFIG_NVME_CUSE 1 00:08:14.231 #undef SPDK_CONFIG_OCF 00:08:14.231 #define SPDK_CONFIG_OCF_PATH 00:08:14.231 #define SPDK_CONFIG_OPENSSL_PATH 00:08:14.231 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:14.231 #undef SPDK_CONFIG_PGO_USE 00:08:14.231 #define SPDK_CONFIG_PREFIX /usr/local 00:08:14.231 #undef SPDK_CONFIG_RAID5F 00:08:14.231 #undef SPDK_CONFIG_RBD 00:08:14.231 #define SPDK_CONFIG_RDMA 1 00:08:14.231 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:14.231 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:14.231 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:14.231 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:14.231 #define SPDK_CONFIG_SHARED 1 00:08:14.231 #undef SPDK_CONFIG_SMA 00:08:14.231 #define SPDK_CONFIG_TESTS 1 00:08:14.231 #undef SPDK_CONFIG_TSAN 00:08:14.231 #define SPDK_CONFIG_UBLK 1 00:08:14.231 #define SPDK_CONFIG_UBSAN 1 00:08:14.231 #undef SPDK_CONFIG_UNIT_TESTS 00:08:14.231 #undef SPDK_CONFIG_URING 00:08:14.231 #define SPDK_CONFIG_URING_PATH 00:08:14.231 #undef SPDK_CONFIG_URING_ZNS 00:08:14.231 #define SPDK_CONFIG_USDT 1 00:08:14.231 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:14.231 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:14.231 #undef SPDK_CONFIG_VFIO_USER 00:08:14.231 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:14.231 #define SPDK_CONFIG_VHOST 1 00:08:14.231 #define SPDK_CONFIG_VIRTIO 1 00:08:14.231 #undef SPDK_CONFIG_VTUNE 00:08:14.231 #define SPDK_CONFIG_VTUNE_DIR 00:08:14.231 #define SPDK_CONFIG_WERROR 1 00:08:14.231 #define SPDK_CONFIG_WPDK_DIR 00:08:14.231 #undef SPDK_CONFIG_XNVME 00:08:14.231 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:14.231 00:17:00 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:14.231 00:17:00 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.231 00:17:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.231 00:17:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.231 00:17:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.231 00:17:00 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.231 00:17:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.231 00:17:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.231 00:17:00 -- paths/export.sh@5 -- # export PATH 00:08:14.231 00:17:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.231 00:17:00 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:14.231 00:17:00 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:14.231 00:17:00 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:14.231 00:17:00 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:14.231 00:17:00 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:08:14.231 00:17:00 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:08:14.231 00:17:00 -- pm/common@16 -- # TEST_TAG=N/A 00:08:14.231 00:17:00 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:08:14.231 00:17:00 -- common/autotest_common.sh@52 -- # : 1 00:08:14.231 00:17:00 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:14.231 00:17:00 -- common/autotest_common.sh@56 -- # : 0 00:08:14.231 00:17:00 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:14.231 00:17:00 -- common/autotest_common.sh@58 -- # : 0 00:08:14.231 00:17:00 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:14.231 00:17:00 -- 
common/autotest_common.sh@60 -- # : 1 00:08:14.231 00:17:00 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:14.231 00:17:00 -- common/autotest_common.sh@62 -- # : 0 00:08:14.231 00:17:00 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:14.231 00:17:00 -- common/autotest_common.sh@64 -- # : 00:08:14.231 00:17:00 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:14.231 00:17:00 -- common/autotest_common.sh@66 -- # : 0 00:08:14.231 00:17:00 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:14.231 00:17:00 -- common/autotest_common.sh@68 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:14.232 00:17:00 -- common/autotest_common.sh@70 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:14.232 00:17:00 -- common/autotest_common.sh@72 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:14.232 00:17:00 -- common/autotest_common.sh@74 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:14.232 00:17:00 -- common/autotest_common.sh@76 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:14.232 00:17:00 -- common/autotest_common.sh@78 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:14.232 00:17:00 -- common/autotest_common.sh@80 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:14.232 00:17:00 -- common/autotest_common.sh@82 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:14.232 00:17:00 -- common/autotest_common.sh@84 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:14.232 00:17:00 -- common/autotest_common.sh@86 -- # : 1 00:08:14.232 00:17:00 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:14.232 00:17:00 -- common/autotest_common.sh@88 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:14.232 00:17:00 -- common/autotest_common.sh@90 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:14.232 00:17:00 -- common/autotest_common.sh@92 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:14.232 00:17:00 -- common/autotest_common.sh@94 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:14.232 00:17:00 -- common/autotest_common.sh@96 -- # : tcp 00:08:14.232 00:17:00 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:14.232 00:17:00 -- common/autotest_common.sh@98 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:14.232 00:17:00 -- common/autotest_common.sh@100 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:14.232 00:17:00 -- common/autotest_common.sh@102 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:14.232 00:17:00 -- common/autotest_common.sh@104 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:14.232 00:17:00 -- common/autotest_common.sh@106 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:14.232 
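The long run of `: 0` / `export SPDK_TEST_...` pairs traced here is autotest_common.sh giving every test knob a default before exporting it to the child test scripts; under `set -x` the parameter expansion has already happened, so only the resulting value appears in the trace. A minimal sketch of that bash idiom, with just two illustrative flags rather than the full list:

    : "${SPDK_TEST_NVMF:=0}"              # keep a value handed down by the CI job, else default to 0
    export SPDK_TEST_NVMF                 # make the flag visible to child scripts
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    export SPDK_TEST_NVMF_TRANSPORT

In this run the job environment carries SPDK_TEST_NVMF=1 and SPDK_TEST_NVMF_TRANSPORT=tcp, which is why the trace shows `: 1` at autotest_common.sh@86 and `: tcp` at @96, and why this job exercises the TCP transport.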
00:17:00 -- common/autotest_common.sh@108 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:14.232 00:17:00 -- common/autotest_common.sh@110 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:14.232 00:17:00 -- common/autotest_common.sh@112 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:14.232 00:17:00 -- common/autotest_common.sh@114 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:14.232 00:17:00 -- common/autotest_common.sh@116 -- # : 1 00:08:14.232 00:17:00 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:14.232 00:17:00 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:08:14.232 00:17:00 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:14.232 00:17:00 -- common/autotest_common.sh@120 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:14.232 00:17:00 -- common/autotest_common.sh@122 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:14.232 00:17:00 -- common/autotest_common.sh@124 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:14.232 00:17:00 -- common/autotest_common.sh@126 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:14.232 00:17:00 -- common/autotest_common.sh@128 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:14.232 00:17:00 -- common/autotest_common.sh@130 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:14.232 00:17:00 -- common/autotest_common.sh@132 -- # : v23.11 00:08:14.232 00:17:00 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:14.232 00:17:00 -- common/autotest_common.sh@134 -- # : true 00:08:14.232 00:17:00 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:14.232 00:17:00 -- common/autotest_common.sh@136 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:14.232 00:17:00 -- common/autotest_common.sh@138 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:14.232 00:17:00 -- common/autotest_common.sh@140 -- # : 1 00:08:14.232 00:17:00 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:14.232 00:17:00 -- common/autotest_common.sh@142 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:14.232 00:17:00 -- common/autotest_common.sh@144 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:14.232 00:17:00 -- common/autotest_common.sh@146 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:14.232 00:17:00 -- common/autotest_common.sh@148 -- # : 00:08:14.232 00:17:00 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:14.232 00:17:00 -- common/autotest_common.sh@150 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:14.232 00:17:00 -- common/autotest_common.sh@152 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:14.232 00:17:00 -- common/autotest_common.sh@154 -- # : 0 00:08:14.232 00:17:00 -- 
common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:14.232 00:17:00 -- common/autotest_common.sh@156 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:14.232 00:17:00 -- common/autotest_common.sh@158 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:14.232 00:17:00 -- common/autotest_common.sh@160 -- # : 0 00:08:14.232 00:17:00 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:14.232 00:17:00 -- common/autotest_common.sh@163 -- # : 00:08:14.232 00:17:00 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:14.232 00:17:00 -- common/autotest_common.sh@165 -- # : 1 00:08:14.232 00:17:00 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:14.232 00:17:00 -- common/autotest_common.sh@167 -- # : 1 00:08:14.232 00:17:00 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:14.232 00:17:00 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:14.232 00:17:00 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:14.232 00:17:00 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:14.232 00:17:00 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:14.232 00:17:00 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:14.232 00:17:00 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:14.232 00:17:00 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:14.232 00:17:00 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:14.232 00:17:00 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:14.232 00:17:00 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:14.232 00:17:00 -- common/autotest_common.sh@181 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:14.232 00:17:00 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:14.232 00:17:00 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:14.232 00:17:00 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:14.232 00:17:00 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:14.232 00:17:00 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:14.232 00:17:00 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:14.232 00:17:00 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:14.232 00:17:00 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:14.232 00:17:00 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:14.232 00:17:00 -- common/autotest_common.sh@196 -- # cat 00:08:14.232 00:17:00 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:14.232 00:17:00 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:14.232 00:17:00 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:14.232 00:17:00 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:14.232 00:17:00 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:14.232 00:17:00 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:14.232 00:17:00 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:14.232 00:17:00 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:14.232 00:17:00 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:14.232 00:17:00 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:14.232 00:17:00 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:14.232 00:17:00 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:14.232 00:17:00 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:14.232 00:17:00 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:14.232 00:17:00 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:14.232 00:17:00 -- common/autotest_common.sh@242 -- # export 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:14.232 00:17:00 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:14.232 00:17:00 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:14.232 00:17:00 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:14.232 00:17:00 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:14.232 00:17:00 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:14.232 00:17:00 -- common/autotest_common.sh@249 -- # valgrind= 00:08:14.232 00:17:00 -- common/autotest_common.sh@255 -- # uname -s 00:08:14.232 00:17:00 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:14.232 00:17:00 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:14.232 00:17:00 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:14.232 00:17:00 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:14.232 00:17:00 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:14.232 00:17:00 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:14.232 00:17:00 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:14.232 00:17:00 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:08:14.232 00:17:00 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:14.232 00:17:00 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:14.232 00:17:00 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:08:14.232 00:17:00 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:14.232 00:17:00 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:14.232 00:17:00 -- common/autotest_common.sh@291 -- # for i in "$@" 00:08:14.232 00:17:00 -- common/autotest_common.sh@292 -- # case "$i" in 00:08:14.232 00:17:00 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:08:14.232 00:17:00 -- common/autotest_common.sh@309 -- # [[ -z 72167 ]] 00:08:14.232 00:17:00 -- common/autotest_common.sh@309 -- # kill -0 72167 00:08:14.232 00:17:00 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:14.232 00:17:00 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:14.232 00:17:00 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:14.232 00:17:00 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:14.232 00:17:00 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:14.232 00:17:00 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:14.232 00:17:00 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:14.232 00:17:00 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:14.232 00:17:00 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.P0aHHV 00:08:14.232 00:17:00 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:14.232 00:17:00 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:14.232 00:17:00 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:14.232 00:17:00 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.P0aHHV/tests/target /tmp/spdk.P0aHHV 00:08:14.232 00:17:00 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:14.232 00:17:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.232 00:17:00 -- 
common/autotest_common.sh@318 -- # df -T 00:08:14.232 00:17:00 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # mounts["$mount"]=devtmpfs 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:08:14.232 00:17:00 -- common/autotest_common.sh@353 -- # avails["$mount"]=4194304 00:08:14.232 00:17:00 -- common/autotest_common.sh@353 -- # sizes["$mount"]=4194304 00:08:14.232 00:17:00 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:14.232 00:17:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:14.232 00:17:00 -- common/autotest_common.sh@353 -- # avails["$mount"]=6266630144 00:08:14.232 00:17:00 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267887616 00:08:14.232 00:17:00 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:08:14.232 00:17:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:14.232 00:17:00 -- common/autotest_common.sh@353 -- # avails["$mount"]=2494353408 00:08:14.232 00:17:00 -- common/autotest_common.sh@353 -- # sizes["$mount"]=2507157504 00:08:14.232 00:17:00 -- common/autotest_common.sh@354 -- # uses["$mount"]=12804096 00:08:14.232 00:17:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:08:14.232 00:17:00 -- common/autotest_common.sh@353 -- # avails["$mount"]=13071675392 00:08:14.232 00:17:00 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:08:14.232 00:17:00 -- common/autotest_common.sh@354 -- # uses["$mount"]=5973315584 00:08:14.232 00:17:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:08:14.232 00:17:00 -- common/autotest_common.sh@353 -- # avails["$mount"]=13071675392 00:08:14.232 00:17:00 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:08:14.232 00:17:00 -- common/autotest_common.sh@354 -- # uses["$mount"]=5973315584 00:08:14.232 00:17:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:14.232 00:17:00 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267752448 00:08:14.232 00:17:00 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267891712 00:08:14.232 00:17:00 -- common/autotest_common.sh@354 -- # uses["$mount"]=139264 00:08:14.232 00:17:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda2 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:08:14.232 00:17:00 -- common/autotest_common.sh@353 -- # avails["$mount"]=843546624 00:08:14.232 00:17:00 -- 
common/autotest_common.sh@353 -- # sizes["$mount"]=1012768768 00:08:14.232 00:17:00 -- common/autotest_common.sh@354 -- # uses["$mount"]=100016128 00:08:14.232 00:17:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda3 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:08:14.232 00:17:00 -- common/autotest_common.sh@353 -- # avails["$mount"]=92499968 00:08:14.232 00:17:00 -- common/autotest_common.sh@353 -- # sizes["$mount"]=104607744 00:08:14.232 00:17:00 -- common/autotest_common.sh@354 -- # uses["$mount"]=12107776 00:08:14.232 00:17:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:14.232 00:17:00 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:14.233 00:17:00 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253572608 00:08:14.233 00:17:00 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253576704 00:08:14.233 00:17:00 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:08:14.233 00:17:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.233 00:17:00 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:08:14.233 00:17:00 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:08:14.233 00:17:00 -- common/autotest_common.sh@353 -- # avails["$mount"]=95494725632 00:08:14.233 00:17:00 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:08:14.233 00:17:00 -- common/autotest_common.sh@354 -- # uses["$mount"]=4208054272 00:08:14.233 00:17:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.233 00:17:00 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:14.233 * Looking for test storage... 
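The `df -T` loop and the mounts/fss/sizes/avails arrays above are the test-storage probe: set_test_storage records every mount's filesystem type and free space, then walks the candidate directories and keeps the first one whose backing mount has at least the requested size (here 2214592512 bytes, i.e. 2 GiB plus a safety margin). A simplified, self-contained version of that check; pick_test_storage is an illustrative helper, not the actual set_test_storage():

    pick_test_storage() {
        local requested=$1; shift
        local dir avail mount
        for dir in "$@"; do
            mkdir -p "$dir"
            # last line of `df -P -B1`: filesystem, size, used, available, use%, mount point
            read -r _ _ _ avail _ mount < <(df -P -B1 "$dir" | tail -n 1)
            if (( avail >= requested )); then
                printf '* Found test storage at %s (%s bytes free on %s)\n' "$dir" "$avail" "$mount"
                return 0
            fi
        done
        echo "no candidate has $requested bytes free" >&2
        return 1
    }

    # roughly what the trace is doing: 2 GiB plus margin, candidates in priority order
    pick_test_storage 2214592512 /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.P0aHHV

Here the first candidate sits on the btrfs /home mount with about 13 GB available, so it is accepted and exported as SPDK_TEST_STORAGE.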
00:08:14.233 00:17:00 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:14.233 00:17:00 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:14.233 00:17:00 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.233 00:17:00 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:14.233 00:17:00 -- common/autotest_common.sh@363 -- # mount=/home 00:08:14.233 00:17:00 -- common/autotest_common.sh@365 -- # target_space=13071675392 00:08:14.233 00:17:00 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:14.233 00:17:00 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:14.233 00:17:00 -- common/autotest_common.sh@371 -- # [[ btrfs == tmpfs ]] 00:08:14.233 00:17:00 -- common/autotest_common.sh@371 -- # [[ btrfs == ramfs ]] 00:08:14.233 00:17:00 -- common/autotest_common.sh@371 -- # [[ /home == / ]] 00:08:14.233 00:17:00 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.233 00:17:00 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.233 00:17:00 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.233 00:17:00 -- common/autotest_common.sh@380 -- # return 0 00:08:14.233 00:17:00 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:14.233 00:17:00 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:14.233 00:17:00 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:14.233 00:17:00 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:14.233 00:17:00 -- common/autotest_common.sh@1672 -- # true 00:08:14.233 00:17:00 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:14.233 00:17:00 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:14.233 00:17:00 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:14.233 00:17:00 -- common/autotest_common.sh@27 -- # exec 00:08:14.233 00:17:00 -- common/autotest_common.sh@29 -- # exec 00:08:14.233 00:17:00 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:14.233 00:17:00 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:14.233 00:17:00 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:14.233 00:17:00 -- common/autotest_common.sh@18 -- # set -x 00:08:14.233 00:17:00 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.233 00:17:00 -- nvmf/common.sh@7 -- # uname -s 00:08:14.233 00:17:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.233 00:17:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.233 00:17:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.233 00:17:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.233 00:17:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.233 00:17:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.233 00:17:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.233 00:17:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.233 00:17:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.233 00:17:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.233 00:17:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:08:14.233 00:17:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:08:14.233 00:17:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.233 00:17:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.233 00:17:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:14.233 00:17:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.233 00:17:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.233 00:17:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.233 00:17:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.233 00:17:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.233 00:17:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.233 00:17:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.233 00:17:00 -- paths/export.sh@5 -- # export PATH 00:08:14.233 00:17:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.233 00:17:00 -- nvmf/common.sh@46 -- # : 0 00:08:14.233 00:17:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:14.233 00:17:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:14.233 00:17:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:14.233 00:17:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.233 00:17:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.233 00:17:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:14.233 00:17:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:14.233 00:17:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:14.233 00:17:00 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:14.233 00:17:00 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:14.233 00:17:00 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:14.233 00:17:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:14.233 00:17:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.233 00:17:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:14.233 00:17:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:14.233 00:17:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:14.233 00:17:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.233 00:17:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.233 00:17:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.233 00:17:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:14.233 00:17:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:14.233 00:17:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:14.233 00:17:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:14.233 00:17:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:14.233 00:17:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:14.233 00:17:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.233 00:17:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.233 00:17:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:14.233 00:17:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:14.233 00:17:00 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:14.233 00:17:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:14.233 00:17:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:14.233 00:17:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.233 00:17:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:14.233 00:17:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:14.233 00:17:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:14.233 00:17:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:14.233 00:17:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:14.233 00:17:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:14.233 Cannot find device "nvmf_tgt_br" 00:08:14.233 00:17:00 -- nvmf/common.sh@154 -- # true 00:08:14.233 00:17:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.233 Cannot find device "nvmf_tgt_br2" 00:08:14.233 00:17:00 -- nvmf/common.sh@155 -- # true 00:08:14.233 00:17:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:14.233 00:17:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:14.233 Cannot find device "nvmf_tgt_br" 00:08:14.233 00:17:00 -- nvmf/common.sh@157 -- # true 00:08:14.233 00:17:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:14.233 Cannot find device "nvmf_tgt_br2" 00:08:14.233 00:17:00 -- nvmf/common.sh@158 -- # true 00:08:14.233 00:17:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:14.233 00:17:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:14.233 00:17:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:14.233 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:14.233 00:17:00 -- nvmf/common.sh@161 -- # true 00:08:14.233 00:17:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:14.233 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:14.233 00:17:00 -- nvmf/common.sh@162 -- # true 00:08:14.233 00:17:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:14.233 00:17:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:14.233 00:17:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:14.233 00:17:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:14.233 00:17:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:14.233 00:17:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:14.233 00:17:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:14.233 00:17:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:14.233 00:17:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:14.233 00:17:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:14.233 00:17:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:14.233 00:17:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:14.233 00:17:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:14.233 00:17:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:14.233 00:17:00 
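The "Cannot find device ..." and "Cannot open network namespace ..." messages above are expected on a clean host: nvmf_veth_init first tears down anything left behind by a previous run, and each cleanup command is written as `cmd || true` (which is why the trace shows the failing command and then `true` from the same script line), so the failures never trip the error handling. A condensed sketch of the teardown-then-create pattern, using the interface and namespace names from the log; the real nvmf_veth_init interleaves these with additional bridge and link-state commands:

    # best-effort removal of stale state; '|| true' keeps errexit happy
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip netns delete nvmf_tgt_ns_spdk || true

    # fresh namespace plus veth pairs; one end of each pair stays in the host namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

The initiator keeps 10.0.0.1 in the host namespace while the target addresses 10.0.0.2 and 10.0.0.3 live inside nvmf_tgt_ns_spdk; the bridge, nvmf_br, and the iptables ACCEPT rule for port 4420 that follow are what let the two sides reach each other.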
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:14.233 00:17:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:14.233 00:17:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:14.233 00:17:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:14.233 00:17:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:14.233 00:17:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:14.233 00:17:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:14.233 00:17:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:14.233 00:17:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:14.233 00:17:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:14.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:08:14.233 00:08:14.233 --- 10.0.0.2 ping statistics --- 00:08:14.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.233 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:08:14.233 00:17:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:14.233 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:14.233 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:08:14.233 00:08:14.233 --- 10.0.0.3 ping statistics --- 00:08:14.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.233 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:14.233 00:17:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:14.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:08:14.233 00:08:14.233 --- 10.0.0.1 ping statistics --- 00:08:14.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.233 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:08:14.233 00:17:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.233 00:17:00 -- nvmf/common.sh@421 -- # return 0 00:08:14.233 00:17:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:14.233 00:17:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.233 00:17:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:14.233 00:17:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:14.233 00:17:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.233 00:17:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:14.233 00:17:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:14.233 00:17:00 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:14.233 00:17:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:14.233 00:17:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:14.233 00:17:00 -- common/autotest_common.sh@10 -- # set +x 00:08:14.233 ************************************ 00:08:14.233 START TEST nvmf_filesystem_no_in_capsule 00:08:14.233 ************************************ 00:08:14.233 00:17:00 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:14.233 00:17:00 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:14.233 00:17:00 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:14.233 00:17:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:14.233 00:17:00 -- common/autotest_common.sh@712 -- # 
xtrace_disable 00:08:14.233 00:17:00 -- common/autotest_common.sh@10 -- # set +x 00:08:14.233 00:17:00 -- nvmf/common.sh@469 -- # nvmfpid=72336 00:08:14.233 00:17:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:14.233 00:17:00 -- nvmf/common.sh@470 -- # waitforlisten 72336 00:08:14.233 00:17:00 -- common/autotest_common.sh@819 -- # '[' -z 72336 ']' 00:08:14.233 00:17:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.233 00:17:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:14.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.234 00:17:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.234 00:17:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:14.234 00:17:00 -- common/autotest_common.sh@10 -- # set +x 00:08:14.234 [2024-07-13 00:17:00.748297] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:14.234 [2024-07-13 00:17:00.748387] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.234 [2024-07-13 00:17:00.889781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.234 [2024-07-13 00:17:00.997960] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:14.234 [2024-07-13 00:17:00.998140] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.234 [2024-07-13 00:17:00.998156] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.234 [2024-07-13 00:17:00.998167] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
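nvmfappstart launches the target pinned to the test namespace and then waitforlisten polls the JSON-RPC socket until the app is ready, which is the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above. A simplified stand-in for that startup-and-wait logic, with the command line copied from the trace; the loop below is an illustrative sketch, not the real waitforlisten(), which has its own timeout and retry handling:

    # instance 0, all tracepoint groups enabled, 4-core mask, run inside the test namespace
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # poll the RPC socket until the target answers
    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.1
    done

Once the app answers, the reactor-started notices below confirm that all four cores in the 0xF mask are running.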
00:08:14.234 [2024-07-13 00:17:00.998306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.234 [2024-07-13 00:17:00.998419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.234 [2024-07-13 00:17:00.998509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.234 [2024-07-13 00:17:00.998514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.800 00:17:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:14.800 00:17:01 -- common/autotest_common.sh@852 -- # return 0 00:08:14.800 00:17:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:14.800 00:17:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:14.800 00:17:01 -- common/autotest_common.sh@10 -- # set +x 00:08:14.800 00:17:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.800 00:17:01 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:14.800 00:17:01 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:14.800 00:17:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.800 00:17:01 -- common/autotest_common.sh@10 -- # set +x 00:08:14.800 [2024-07-13 00:17:01.858146] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.800 00:17:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.800 00:17:01 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:14.800 00:17:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.800 00:17:01 -- common/autotest_common.sh@10 -- # set +x 00:08:15.058 Malloc1 00:08:15.058 00:17:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.058 00:17:02 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:15.058 00:17:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.058 00:17:02 -- common/autotest_common.sh@10 -- # set +x 00:08:15.058 00:17:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.058 00:17:02 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:15.058 00:17:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.058 00:17:02 -- common/autotest_common.sh@10 -- # set +x 00:08:15.058 00:17:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.058 00:17:02 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.058 00:17:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.058 00:17:02 -- common/autotest_common.sh@10 -- # set +x 00:08:15.058 [2024-07-13 00:17:02.063836] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.058 00:17:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.058 00:17:02 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:15.058 00:17:02 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:15.058 00:17:02 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:15.058 00:17:02 -- common/autotest_common.sh@1359 -- # local bs 00:08:15.058 00:17:02 -- common/autotest_common.sh@1360 -- # local nb 00:08:15.058 00:17:02 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:15.058 00:17:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.058 00:17:02 -- common/autotest_common.sh@10 -- # set +x 00:08:15.058 
00:17:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.058 00:17:02 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:15.058 { 00:08:15.058 "aliases": [ 00:08:15.058 "6b4cd744-2c65-4baa-a98d-89f93f9c7532" 00:08:15.058 ], 00:08:15.058 "assigned_rate_limits": { 00:08:15.058 "r_mbytes_per_sec": 0, 00:08:15.058 "rw_ios_per_sec": 0, 00:08:15.058 "rw_mbytes_per_sec": 0, 00:08:15.058 "w_mbytes_per_sec": 0 00:08:15.058 }, 00:08:15.058 "block_size": 512, 00:08:15.058 "claim_type": "exclusive_write", 00:08:15.058 "claimed": true, 00:08:15.058 "driver_specific": {}, 00:08:15.058 "memory_domains": [ 00:08:15.058 { 00:08:15.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.058 "dma_device_type": 2 00:08:15.058 } 00:08:15.058 ], 00:08:15.058 "name": "Malloc1", 00:08:15.058 "num_blocks": 1048576, 00:08:15.058 "product_name": "Malloc disk", 00:08:15.058 "supported_io_types": { 00:08:15.058 "abort": true, 00:08:15.058 "compare": false, 00:08:15.058 "compare_and_write": false, 00:08:15.058 "flush": true, 00:08:15.058 "nvme_admin": false, 00:08:15.058 "nvme_io": false, 00:08:15.058 "read": true, 00:08:15.058 "reset": true, 00:08:15.058 "unmap": true, 00:08:15.058 "write": true, 00:08:15.058 "write_zeroes": true 00:08:15.058 }, 00:08:15.058 "uuid": "6b4cd744-2c65-4baa-a98d-89f93f9c7532", 00:08:15.058 "zoned": false 00:08:15.058 } 00:08:15.058 ]' 00:08:15.058 00:17:02 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:15.058 00:17:02 -- common/autotest_common.sh@1362 -- # bs=512 00:08:15.058 00:17:02 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:15.058 00:17:02 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:15.058 00:17:02 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:15.058 00:17:02 -- common/autotest_common.sh@1367 -- # echo 512 00:08:15.058 00:17:02 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:15.058 00:17:02 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:15.316 00:17:02 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:15.316 00:17:02 -- common/autotest_common.sh@1177 -- # local i=0 00:08:15.316 00:17:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:15.316 00:17:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:15.316 00:17:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:17.213 00:17:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:17.213 00:17:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:17.213 00:17:04 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:17.213 00:17:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:17.213 00:17:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:17.213 00:17:04 -- common/autotest_common.sh@1187 -- # return 0 00:08:17.213 00:17:04 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:17.213 00:17:04 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:17.213 00:17:04 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:17.213 00:17:04 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:17.213 00:17:04 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:17.213 00:17:04 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:17.213 00:17:04 -- 
setup/common.sh@80 -- # echo 536870912 00:08:17.213 00:17:04 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:17.213 00:17:04 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:17.213 00:17:04 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:17.213 00:17:04 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:17.471 00:17:04 -- target/filesystem.sh@69 -- # partprobe 00:08:17.471 00:17:04 -- target/filesystem.sh@70 -- # sleep 1 00:08:18.405 00:17:05 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:18.406 00:17:05 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:18.406 00:17:05 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:18.406 00:17:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:18.406 00:17:05 -- common/autotest_common.sh@10 -- # set +x 00:08:18.406 ************************************ 00:08:18.406 START TEST filesystem_ext4 00:08:18.406 ************************************ 00:08:18.406 00:17:05 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:18.406 00:17:05 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:18.406 00:17:05 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:18.406 00:17:05 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:18.406 00:17:05 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:18.406 00:17:05 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:18.406 00:17:05 -- common/autotest_common.sh@904 -- # local i=0 00:08:18.406 00:17:05 -- common/autotest_common.sh@905 -- # local force 00:08:18.406 00:17:05 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:18.406 00:17:05 -- common/autotest_common.sh@908 -- # force=-F 00:08:18.406 00:17:05 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:18.406 mke2fs 1.46.5 (30-Dec-2021) 00:08:18.663 Discarding device blocks: 0/522240 done 00:08:18.663 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:18.663 Filesystem UUID: df2dd974-cce1-4803-adf9-85a5ac09e774 00:08:18.663 Superblock backups stored on blocks: 00:08:18.663 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:18.663 00:08:18.663 Allocating group tables: 0/64 done 00:08:18.663 Writing inode tables: 0/64 done 00:08:18.663 Creating journal (8192 blocks): done 00:08:18.663 Writing superblocks and filesystem accounting information: 0/64 done 00:08:18.663 00:08:18.663 00:17:05 -- common/autotest_common.sh@921 -- # return 0 00:08:18.663 00:17:05 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:18.921 00:17:05 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:18.921 00:17:05 -- target/filesystem.sh@25 -- # sync 00:08:18.921 00:17:06 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:18.921 00:17:06 -- target/filesystem.sh@27 -- # sync 00:08:18.921 00:17:06 -- target/filesystem.sh@29 -- # i=0 00:08:18.921 00:17:06 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:18.921 00:17:06 -- target/filesystem.sh@37 -- # kill -0 72336 00:08:18.921 00:17:06 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:18.921 00:17:06 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:18.921 00:17:06 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:18.921 00:17:06 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:18.921 ************************************ 00:08:18.921 END TEST filesystem_ext4 00:08:18.921 
************************************ 00:08:18.921 00:08:18.921 real 0m0.499s 00:08:18.921 user 0m0.026s 00:08:18.921 sys 0m0.063s 00:08:18.921 00:17:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.921 00:17:06 -- common/autotest_common.sh@10 -- # set +x 00:08:18.921 00:17:06 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:18.921 00:17:06 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:18.921 00:17:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:18.921 00:17:06 -- common/autotest_common.sh@10 -- # set +x 00:08:19.179 ************************************ 00:08:19.179 START TEST filesystem_btrfs 00:08:19.179 ************************************ 00:08:19.179 00:17:06 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:19.179 00:17:06 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:19.179 00:17:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:19.179 00:17:06 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:19.179 00:17:06 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:19.179 00:17:06 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:19.179 00:17:06 -- common/autotest_common.sh@904 -- # local i=0 00:08:19.179 00:17:06 -- common/autotest_common.sh@905 -- # local force 00:08:19.179 00:17:06 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:19.179 00:17:06 -- common/autotest_common.sh@910 -- # force=-f 00:08:19.179 00:17:06 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:19.179 btrfs-progs v6.6.2 00:08:19.179 See https://btrfs.readthedocs.io for more information. 00:08:19.179 00:08:19.179 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:19.179 NOTE: several default settings have changed in version 5.15, please make sure 00:08:19.179 this does not affect your deployments: 00:08:19.179 - DUP for metadata (-m dup) 00:08:19.179 - enabled no-holes (-O no-holes) 00:08:19.179 - enabled free-space-tree (-R free-space-tree) 00:08:19.179 00:08:19.179 Label: (null) 00:08:19.179 UUID: 102323bc-be6a-4cb1-8cfa-18bba89f1252 00:08:19.179 Node size: 16384 00:08:19.179 Sector size: 4096 00:08:19.179 Filesystem size: 510.00MiB 00:08:19.179 Block group profiles: 00:08:19.179 Data: single 8.00MiB 00:08:19.179 Metadata: DUP 32.00MiB 00:08:19.179 System: DUP 8.00MiB 00:08:19.179 SSD detected: yes 00:08:19.179 Zoned device: no 00:08:19.179 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:19.179 Runtime features: free-space-tree 00:08:19.179 Checksum: crc32c 00:08:19.179 Number of devices: 1 00:08:19.179 Devices: 00:08:19.179 ID SIZE PATH 00:08:19.179 1 510.00MiB /dev/nvme0n1p1 00:08:19.179 00:08:19.179 00:17:06 -- common/autotest_common.sh@921 -- # return 0 00:08:19.179 00:17:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.179 00:17:06 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.179 00:17:06 -- target/filesystem.sh@25 -- # sync 00:08:19.438 00:17:06 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.438 00:17:06 -- target/filesystem.sh@27 -- # sync 00:08:19.438 00:17:06 -- target/filesystem.sh@29 -- # i=0 00:08:19.438 00:17:06 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.438 00:17:06 -- target/filesystem.sh@37 -- # kill -0 72336 00:08:19.438 00:17:06 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.438 00:17:06 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.438 00:17:06 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.438 00:17:06 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.438 ************************************ 00:08:19.438 END TEST filesystem_btrfs 00:08:19.438 ************************************ 00:08:19.438 00:08:19.438 real 0m0.300s 00:08:19.438 user 0m0.022s 00:08:19.438 sys 0m0.076s 00:08:19.438 00:17:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.438 00:17:06 -- common/autotest_common.sh@10 -- # set +x 00:08:19.438 00:17:06 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:19.438 00:17:06 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:19.438 00:17:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.438 00:17:06 -- common/autotest_common.sh@10 -- # set +x 00:08:19.438 ************************************ 00:08:19.438 START TEST filesystem_xfs 00:08:19.438 ************************************ 00:08:19.438 00:17:06 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:19.438 00:17:06 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:19.438 00:17:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:19.438 00:17:06 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:19.438 00:17:06 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:19.438 00:17:06 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:19.438 00:17:06 -- common/autotest_common.sh@904 -- # local i=0 00:08:19.438 00:17:06 -- common/autotest_common.sh@905 -- # local force 00:08:19.438 00:17:06 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:19.438 00:17:06 -- common/autotest_common.sh@910 -- # force=-f 00:08:19.438 00:17:06 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:19.696 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:19.696 = sectsz=512 attr=2, projid32bit=1 00:08:19.696 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:19.696 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:19.696 data = bsize=4096 blocks=130560, imaxpct=25 00:08:19.696 = sunit=0 swidth=0 blks 00:08:19.696 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:19.696 log =internal log bsize=4096 blocks=16384, version=2 00:08:19.696 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:19.696 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:20.263 Discarding blocks...Done. 00:08:20.263 00:17:07 -- common/autotest_common.sh@921 -- # return 0 00:08:20.263 00:17:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:22.832 00:17:09 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:22.832 00:17:09 -- target/filesystem.sh@25 -- # sync 00:08:22.832 00:17:09 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:22.832 00:17:09 -- target/filesystem.sh@27 -- # sync 00:08:22.832 00:17:09 -- target/filesystem.sh@29 -- # i=0 00:08:22.832 00:17:09 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:22.832 00:17:09 -- target/filesystem.sh@37 -- # kill -0 72336 00:08:22.832 00:17:09 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:22.832 00:17:09 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:22.832 00:17:09 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:22.832 00:17:09 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:22.832 ************************************ 00:08:22.832 END TEST filesystem_xfs 00:08:22.832 ************************************ 00:08:22.832 00:08:22.832 real 0m3.293s 00:08:22.832 user 0m0.025s 00:08:22.832 sys 0m0.063s 00:08:22.832 00:17:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.832 00:17:09 -- common/autotest_common.sh@10 -- # set +x 00:08:22.832 00:17:09 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:22.832 00:17:09 -- target/filesystem.sh@93 -- # sync 00:08:22.832 00:17:09 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:22.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.832 00:17:09 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:22.832 00:17:09 -- common/autotest_common.sh@1198 -- # local i=0 00:08:22.832 00:17:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:22.832 00:17:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.832 00:17:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:22.832 00:17:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.832 00:17:09 -- common/autotest_common.sh@1210 -- # return 0 00:08:22.832 00:17:09 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.832 00:17:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:22.832 00:17:09 -- common/autotest_common.sh@10 -- # set +x 00:08:22.832 00:17:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:22.832 00:17:09 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:22.832 00:17:09 -- target/filesystem.sh@101 -- # killprocess 72336 00:08:22.832 00:17:09 -- common/autotest_common.sh@926 -- # '[' -z 72336 ']' 00:08:22.832 00:17:09 -- common/autotest_common.sh@930 -- # kill -0 72336 00:08:22.832 00:17:09 -- 
common/autotest_common.sh@931 -- # uname 00:08:22.832 00:17:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:22.832 00:17:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72336 00:08:22.832 killing process with pid 72336 00:08:22.832 00:17:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:22.832 00:17:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:22.832 00:17:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72336' 00:08:22.832 00:17:09 -- common/autotest_common.sh@945 -- # kill 72336 00:08:22.832 00:17:09 -- common/autotest_common.sh@950 -- # wait 72336 00:08:23.397 ************************************ 00:08:23.398 END TEST nvmf_filesystem_no_in_capsule 00:08:23.398 ************************************ 00:08:23.398 00:17:10 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:23.398 00:08:23.398 real 0m9.707s 00:08:23.398 user 0m37.249s 00:08:23.398 sys 0m1.362s 00:08:23.398 00:17:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.398 00:17:10 -- common/autotest_common.sh@10 -- # set +x 00:08:23.398 00:17:10 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:23.398 00:17:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:23.398 00:17:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.398 00:17:10 -- common/autotest_common.sh@10 -- # set +x 00:08:23.398 ************************************ 00:08:23.398 START TEST nvmf_filesystem_in_capsule 00:08:23.398 ************************************ 00:08:23.398 00:17:10 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:23.398 00:17:10 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:23.398 00:17:10 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:23.398 00:17:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:23.398 00:17:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:23.398 00:17:10 -- common/autotest_common.sh@10 -- # set +x 00:08:23.398 00:17:10 -- nvmf/common.sh@469 -- # nvmfpid=72648 00:08:23.398 00:17:10 -- nvmf/common.sh@470 -- # waitforlisten 72648 00:08:23.398 00:17:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.398 00:17:10 -- common/autotest_common.sh@819 -- # '[' -z 72648 ']' 00:08:23.398 00:17:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.398 00:17:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:23.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.398 00:17:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.398 00:17:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:23.398 00:17:10 -- common/autotest_common.sh@10 -- # set +x 00:08:23.398 [2024-07-13 00:17:10.516681] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:23.398 [2024-07-13 00:17:10.516797] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.656 [2024-07-13 00:17:10.655137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.656 [2024-07-13 00:17:10.754277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:23.656 [2024-07-13 00:17:10.754420] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.656 [2024-07-13 00:17:10.754433] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.656 [2024-07-13 00:17:10.754442] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.656 [2024-07-13 00:17:10.754598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.656 [2024-07-13 00:17:10.754752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.656 [2024-07-13 00:17:10.754881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.656 [2024-07-13 00:17:10.754882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.591 00:17:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:24.591 00:17:11 -- common/autotest_common.sh@852 -- # return 0 00:08:24.591 00:17:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:24.591 00:17:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:24.591 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:08:24.591 00:17:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.591 00:17:11 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:24.591 00:17:11 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:24.591 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.591 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:08:24.591 [2024-07-13 00:17:11.530106] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.591 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.591 00:17:11 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:24.591 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.591 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:08:24.591 Malloc1 00:08:24.591 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.591 00:17:11 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:24.591 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.591 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:08:24.591 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.591 00:17:11 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:24.591 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.591 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:08:24.591 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.591 00:17:11 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.591 00:17:11 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.591 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:08:24.591 [2024-07-13 00:17:11.722866] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.591 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.591 00:17:11 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:24.591 00:17:11 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:24.591 00:17:11 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:24.591 00:17:11 -- common/autotest_common.sh@1359 -- # local bs 00:08:24.592 00:17:11 -- common/autotest_common.sh@1360 -- # local nb 00:08:24.592 00:17:11 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:24.592 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.592 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:08:24.592 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.592 00:17:11 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:24.592 { 00:08:24.592 "aliases": [ 00:08:24.592 "46b9e46a-073b-47a1-ba95-e75ca7f30b22" 00:08:24.592 ], 00:08:24.592 "assigned_rate_limits": { 00:08:24.592 "r_mbytes_per_sec": 0, 00:08:24.592 "rw_ios_per_sec": 0, 00:08:24.592 "rw_mbytes_per_sec": 0, 00:08:24.592 "w_mbytes_per_sec": 0 00:08:24.592 }, 00:08:24.592 "block_size": 512, 00:08:24.592 "claim_type": "exclusive_write", 00:08:24.592 "claimed": true, 00:08:24.592 "driver_specific": {}, 00:08:24.592 "memory_domains": [ 00:08:24.592 { 00:08:24.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.592 "dma_device_type": 2 00:08:24.592 } 00:08:24.592 ], 00:08:24.592 "name": "Malloc1", 00:08:24.592 "num_blocks": 1048576, 00:08:24.592 "product_name": "Malloc disk", 00:08:24.592 "supported_io_types": { 00:08:24.592 "abort": true, 00:08:24.592 "compare": false, 00:08:24.592 "compare_and_write": false, 00:08:24.592 "flush": true, 00:08:24.592 "nvme_admin": false, 00:08:24.592 "nvme_io": false, 00:08:24.592 "read": true, 00:08:24.592 "reset": true, 00:08:24.592 "unmap": true, 00:08:24.592 "write": true, 00:08:24.592 "write_zeroes": true 00:08:24.592 }, 00:08:24.592 "uuid": "46b9e46a-073b-47a1-ba95-e75ca7f30b22", 00:08:24.592 "zoned": false 00:08:24.592 } 00:08:24.592 ]' 00:08:24.592 00:17:11 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:24.592 00:17:11 -- common/autotest_common.sh@1362 -- # bs=512 00:08:24.592 00:17:11 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:24.850 00:17:11 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:24.850 00:17:11 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:24.850 00:17:11 -- common/autotest_common.sh@1367 -- # echo 512 00:08:24.850 00:17:11 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:24.850 00:17:11 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:24.850 00:17:12 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:24.850 00:17:12 -- common/autotest_common.sh@1177 -- # local i=0 00:08:24.850 00:17:12 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:24.850 00:17:12 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:24.850 00:17:12 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:27.379 00:17:14 -- 
common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:27.379 00:17:14 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:27.379 00:17:14 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:27.379 00:17:14 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:27.379 00:17:14 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:27.379 00:17:14 -- common/autotest_common.sh@1187 -- # return 0 00:08:27.379 00:17:14 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:27.379 00:17:14 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:27.379 00:17:14 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:27.379 00:17:14 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:27.379 00:17:14 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:27.379 00:17:14 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:27.379 00:17:14 -- setup/common.sh@80 -- # echo 536870912 00:08:27.379 00:17:14 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:27.379 00:17:14 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:27.379 00:17:14 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:27.379 00:17:14 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:27.379 00:17:14 -- target/filesystem.sh@69 -- # partprobe 00:08:27.379 00:17:14 -- target/filesystem.sh@70 -- # sleep 1 00:08:27.945 00:17:15 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:27.945 00:17:15 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:27.945 00:17:15 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:27.945 00:17:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:27.945 00:17:15 -- common/autotest_common.sh@10 -- # set +x 00:08:27.945 ************************************ 00:08:27.945 START TEST filesystem_in_capsule_ext4 00:08:27.945 ************************************ 00:08:27.945 00:17:15 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:27.945 00:17:15 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:27.945 00:17:15 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:27.945 00:17:15 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:27.945 00:17:15 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:27.945 00:17:15 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:27.945 00:17:15 -- common/autotest_common.sh@904 -- # local i=0 00:08:27.945 00:17:15 -- common/autotest_common.sh@905 -- # local force 00:08:27.945 00:17:15 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:27.945 00:17:15 -- common/autotest_common.sh@908 -- # force=-F 00:08:27.945 00:17:15 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:27.945 mke2fs 1.46.5 (30-Dec-2021) 00:08:28.203 Discarding device blocks: 0/522240 done 00:08:28.203 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:28.203 Filesystem UUID: 46dae272-6519-494a-8ddb-a3657d367a65 00:08:28.203 Superblock backups stored on blocks: 00:08:28.203 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:28.203 00:08:28.203 Allocating group tables: 0/64 done 00:08:28.203 Writing inode tables: 0/64 done 00:08:28.203 Creating journal (8192 blocks): done 00:08:28.203 Writing superblocks and filesystem accounting information: 0/64 done 00:08:28.203 00:08:28.203 
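Not part of the captured output: the mkfs.ext4 run above is followed by the same mount/write/verify cycle already traced for btrfs and xfs earlier in the log. A condensed sketch of that cycle, reconstructed from the trace; the device, mount point and helper names are the ones the script prints, and the real helpers in target/filesystem.sh carry retry and tracing logic that is omitted here.
# Sketch only -- not part of the captured log.
fstype=ext4                               # the same cycle runs with btrfs and xfs
dev=/dev/nvme0n1p1                        # partition created on the exported namespace
make_filesystem "$fstype" "$dev"          # mkfs.ext4 -F / mkfs.btrfs -f / mkfs.xfs -f
mount "$dev" /mnt/device                  # mount the freshly formatted filesystem
touch /mnt/device/aaa && sync             # write a file and flush it to the target
rm /mnt/device/aaa && sync                # delete it and flush again
umount /mnt/device                        # detach before the partition is removed
kill -0 "$nvmfpid"                        # nvmf_tgt (pid 72648 in this run) must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1     # the namespace must still be visible
lsblk -l -o NAME | grep -q -w nvme0n1p1   # and so must the test partition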
00:17:15 -- common/autotest_common.sh@921 -- # return 0 00:08:28.203 00:17:15 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:28.203 00:17:15 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:28.462 00:17:15 -- target/filesystem.sh@25 -- # sync 00:08:28.462 00:17:15 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:28.462 00:17:15 -- target/filesystem.sh@27 -- # sync 00:08:28.462 00:17:15 -- target/filesystem.sh@29 -- # i=0 00:08:28.462 00:17:15 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:28.462 00:17:15 -- target/filesystem.sh@37 -- # kill -0 72648 00:08:28.462 00:17:15 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:28.462 00:17:15 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:28.462 00:17:15 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:28.462 00:17:15 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:28.462 00:08:28.462 real 0m0.354s 00:08:28.462 user 0m0.021s 00:08:28.462 sys 0m0.060s 00:08:28.462 00:17:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.462 00:17:15 -- common/autotest_common.sh@10 -- # set +x 00:08:28.462 ************************************ 00:08:28.462 END TEST filesystem_in_capsule_ext4 00:08:28.462 ************************************ 00:08:28.462 00:17:15 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:28.462 00:17:15 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:28.462 00:17:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.462 00:17:15 -- common/autotest_common.sh@10 -- # set +x 00:08:28.462 ************************************ 00:08:28.462 START TEST filesystem_in_capsule_btrfs 00:08:28.462 ************************************ 00:08:28.462 00:17:15 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:28.462 00:17:15 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:28.462 00:17:15 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:28.462 00:17:15 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:28.462 00:17:15 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:28.462 00:17:15 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:28.462 00:17:15 -- common/autotest_common.sh@904 -- # local i=0 00:08:28.462 00:17:15 -- common/autotest_common.sh@905 -- # local force 00:08:28.462 00:17:15 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:28.462 00:17:15 -- common/autotest_common.sh@910 -- # force=-f 00:08:28.462 00:17:15 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:28.721 btrfs-progs v6.6.2 00:08:28.721 See https://btrfs.readthedocs.io for more information. 00:08:28.721 00:08:28.721 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:28.721 NOTE: several default settings have changed in version 5.15, please make sure 00:08:28.721 this does not affect your deployments: 00:08:28.721 - DUP for metadata (-m dup) 00:08:28.721 - enabled no-holes (-O no-holes) 00:08:28.721 - enabled free-space-tree (-R free-space-tree) 00:08:28.721 00:08:28.721 Label: (null) 00:08:28.721 UUID: 5b96ff41-0bbc-4a90-9a3b-ebba1c0fdc9f 00:08:28.721 Node size: 16384 00:08:28.721 Sector size: 4096 00:08:28.721 Filesystem size: 510.00MiB 00:08:28.721 Block group profiles: 00:08:28.721 Data: single 8.00MiB 00:08:28.721 Metadata: DUP 32.00MiB 00:08:28.721 System: DUP 8.00MiB 00:08:28.721 SSD detected: yes 00:08:28.721 Zoned device: no 00:08:28.721 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:28.721 Runtime features: free-space-tree 00:08:28.721 Checksum: crc32c 00:08:28.721 Number of devices: 1 00:08:28.721 Devices: 00:08:28.721 ID SIZE PATH 00:08:28.721 1 510.00MiB /dev/nvme0n1p1 00:08:28.721 00:08:28.721 00:17:15 -- common/autotest_common.sh@921 -- # return 0 00:08:28.721 00:17:15 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:28.721 00:17:15 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:28.721 00:17:15 -- target/filesystem.sh@25 -- # sync 00:08:28.721 00:17:15 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:28.721 00:17:15 -- target/filesystem.sh@27 -- # sync 00:08:28.721 00:17:15 -- target/filesystem.sh@29 -- # i=0 00:08:28.721 00:17:15 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:28.721 00:17:15 -- target/filesystem.sh@37 -- # kill -0 72648 00:08:28.721 00:17:15 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:28.721 00:17:15 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:28.721 00:17:15 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:28.721 00:17:15 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:28.721 ************************************ 00:08:28.721 END TEST filesystem_in_capsule_btrfs 00:08:28.721 ************************************ 00:08:28.721 00:08:28.721 real 0m0.236s 00:08:28.721 user 0m0.020s 00:08:28.721 sys 0m0.072s 00:08:28.721 00:17:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.721 00:17:15 -- common/autotest_common.sh@10 -- # set +x 00:08:28.721 00:17:15 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:28.721 00:17:15 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:28.721 00:17:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.721 00:17:15 -- common/autotest_common.sh@10 -- # set +x 00:08:28.721 ************************************ 00:08:28.721 START TEST filesystem_in_capsule_xfs 00:08:28.721 ************************************ 00:08:28.721 00:17:15 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:28.721 00:17:15 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:28.721 00:17:15 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:28.721 00:17:15 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:28.721 00:17:15 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:28.721 00:17:15 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:28.721 00:17:15 -- common/autotest_common.sh@904 -- # local i=0 00:08:28.721 00:17:15 -- common/autotest_common.sh@905 -- # local force 00:08:28.721 00:17:15 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:28.721 00:17:15 -- common/autotest_common.sh@910 -- # force=-f 
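Not part of the captured output: the '[ btrfs = ext4 ]' / force=-f lines above (and the matching '[ ext4 = ext4 ]' / force=-F pair in the ext4 test) are the force-flag selection inside the make_filesystem helper. A minimal sketch of that dispatch, assuming nothing beyond what the trace shows; the real function in autotest_common.sh also retries on failure.
# Sketch only -- not part of the captured log.
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F          # mke2fs forces formatting with -F
    else
        force=-f          # mkfs.btrfs and mkfs.xfs use -f
    fi
    mkfs."$fstype" "$force" "$dev_name"
}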
00:08:28.721 00:17:15 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:28.980 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:28.980 = sectsz=512 attr=2, projid32bit=1 00:08:28.980 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:28.980 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:28.980 data = bsize=4096 blocks=130560, imaxpct=25 00:08:28.980 = sunit=0 swidth=0 blks 00:08:28.980 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:28.980 log =internal log bsize=4096 blocks=16384, version=2 00:08:28.980 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:28.980 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:29.547 Discarding blocks...Done. 00:08:29.547 00:17:16 -- common/autotest_common.sh@921 -- # return 0 00:08:29.547 00:17:16 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:31.508 00:17:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:31.508 00:17:18 -- target/filesystem.sh@25 -- # sync 00:08:31.508 00:17:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:31.508 00:17:18 -- target/filesystem.sh@27 -- # sync 00:08:31.508 00:17:18 -- target/filesystem.sh@29 -- # i=0 00:08:31.508 00:17:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:31.508 00:17:18 -- target/filesystem.sh@37 -- # kill -0 72648 00:08:31.508 00:17:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:31.508 00:17:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:31.508 00:17:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:31.508 00:17:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:31.508 00:08:31.508 real 0m2.657s 00:08:31.508 user 0m0.017s 00:08:31.508 sys 0m0.062s 00:08:31.508 00:17:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.508 00:17:18 -- common/autotest_common.sh@10 -- # set +x 00:08:31.508 ************************************ 00:08:31.508 END TEST filesystem_in_capsule_xfs 00:08:31.508 ************************************ 00:08:31.508 00:17:18 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:31.508 00:17:18 -- target/filesystem.sh@93 -- # sync 00:08:31.508 00:17:18 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:31.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.509 00:17:18 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:31.509 00:17:18 -- common/autotest_common.sh@1198 -- # local i=0 00:08:31.509 00:17:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:31.509 00:17:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:31.509 00:17:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:31.509 00:17:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:31.509 00:17:18 -- common/autotest_common.sh@1210 -- # return 0 00:08:31.509 00:17:18 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:31.509 00:17:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.509 00:17:18 -- common/autotest_common.sh@10 -- # set +x 00:08:31.509 00:17:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.509 00:17:18 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:31.509 00:17:18 -- target/filesystem.sh@101 -- # killprocess 72648 00:08:31.509 00:17:18 -- common/autotest_common.sh@926 -- # '[' -z 72648 ']' 00:08:31.509 00:17:18 -- common/autotest_common.sh@930 -- # kill -0 72648 
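Not part of the captured output: after the xfs test the trace becomes the per-run teardown -- drop the test partition, disconnect the initiator, wait for the serial to disappear, delete the subsystem and stop nvmf_tgt. A condensed sketch, assuming the PID, NQN and serial printed in this run; rpc_cmd is the autotest wrapper around rpc.py, and the waitforserial_disconnect/killprocess helpers are shown only in outline.
# Sketch only -- not part of the captured log.
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # remove partition 1 under a lock
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # detach the kernel initiator
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1                                           # wait until the serial is gone
done
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 72648                                            # SIGTERM to nvmf_tgt (pid from this run)
wait 72648                                            # reap it; works because the target is a child job of the test shell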
00:08:31.509 00:17:18 -- common/autotest_common.sh@931 -- # uname 00:08:31.509 00:17:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:31.509 00:17:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72648 00:08:31.509 killing process with pid 72648 00:08:31.509 00:17:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:31.509 00:17:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:31.509 00:17:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72648' 00:08:31.509 00:17:18 -- common/autotest_common.sh@945 -- # kill 72648 00:08:31.509 00:17:18 -- common/autotest_common.sh@950 -- # wait 72648 00:08:32.075 00:17:19 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:32.075 00:08:32.075 real 0m8.674s 00:08:32.075 user 0m32.928s 00:08:32.075 sys 0m1.540s 00:08:32.075 00:17:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.075 ************************************ 00:08:32.075 END TEST nvmf_filesystem_in_capsule 00:08:32.075 00:17:19 -- common/autotest_common.sh@10 -- # set +x 00:08:32.075 ************************************ 00:08:32.075 00:17:19 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:32.075 00:17:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:32.075 00:17:19 -- nvmf/common.sh@116 -- # sync 00:08:32.075 00:17:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:32.075 00:17:19 -- nvmf/common.sh@119 -- # set +e 00:08:32.075 00:17:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:32.075 00:17:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:32.075 rmmod nvme_tcp 00:08:32.075 rmmod nvme_fabrics 00:08:32.075 rmmod nvme_keyring 00:08:32.075 00:17:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:32.075 00:17:19 -- nvmf/common.sh@123 -- # set -e 00:08:32.075 00:17:19 -- nvmf/common.sh@124 -- # return 0 00:08:32.075 00:17:19 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:32.075 00:17:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:32.075 00:17:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:32.075 00:17:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:32.075 00:17:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:32.075 00:17:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:32.075 00:17:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.075 00:17:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.075 00:17:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.075 00:17:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:32.075 00:08:32.075 real 0m19.223s 00:08:32.075 user 1m10.412s 00:08:32.075 sys 0m3.315s 00:08:32.075 00:17:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.075 00:17:19 -- common/autotest_common.sh@10 -- # set +x 00:08:32.075 ************************************ 00:08:32.075 END TEST nvmf_filesystem 00:08:32.075 ************************************ 00:08:32.334 00:17:19 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:32.334 00:17:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:32.334 00:17:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.334 00:17:19 -- common/autotest_common.sh@10 -- # set +x 00:08:32.334 ************************************ 00:08:32.334 START TEST nvmf_discovery 00:08:32.334 ************************************ 00:08:32.334 00:17:19 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:32.334 * Looking for test storage... 00:08:32.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:32.334 00:17:19 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:32.334 00:17:19 -- nvmf/common.sh@7 -- # uname -s 00:08:32.334 00:17:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.334 00:17:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.334 00:17:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.334 00:17:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.334 00:17:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.334 00:17:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.334 00:17:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.334 00:17:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.334 00:17:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.334 00:17:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.334 00:17:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:08:32.334 00:17:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:08:32.334 00:17:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.334 00:17:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.334 00:17:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:32.334 00:17:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.334 00:17:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.334 00:17:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.334 00:17:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.334 00:17:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.334 00:17:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.334 00:17:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.334 00:17:19 -- paths/export.sh@5 -- # export PATH 00:08:32.334 00:17:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.334 00:17:19 -- nvmf/common.sh@46 -- # : 0 00:08:32.334 00:17:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:32.334 00:17:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:32.334 00:17:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:32.334 00:17:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.334 00:17:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.334 00:17:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:32.334 00:17:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:32.334 00:17:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:32.334 00:17:19 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:32.334 00:17:19 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:32.334 00:17:19 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:32.334 00:17:19 -- target/discovery.sh@15 -- # hash nvme 00:08:32.334 00:17:19 -- target/discovery.sh@20 -- # nvmftestinit 00:08:32.334 00:17:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:32.334 00:17:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.334 00:17:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:32.334 00:17:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:32.334 00:17:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:32.334 00:17:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.334 00:17:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.334 00:17:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.334 00:17:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:32.334 00:17:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:32.334 00:17:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:32.334 00:17:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:32.334 00:17:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:32.334 00:17:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:32.334 00:17:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.334 00:17:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.334 00:17:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:32.334 00:17:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:32.334 00:17:19 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:32.334 00:17:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:32.334 00:17:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:32.334 00:17:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.334 00:17:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:32.334 00:17:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:32.334 00:17:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:32.334 00:17:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:32.334 00:17:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:32.334 00:17:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:32.334 Cannot find device "nvmf_tgt_br" 00:08:32.334 00:17:19 -- nvmf/common.sh@154 -- # true 00:08:32.334 00:17:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:32.334 Cannot find device "nvmf_tgt_br2" 00:08:32.335 00:17:19 -- nvmf/common.sh@155 -- # true 00:08:32.335 00:17:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:32.335 00:17:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:32.335 Cannot find device "nvmf_tgt_br" 00:08:32.335 00:17:19 -- nvmf/common.sh@157 -- # true 00:08:32.335 00:17:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:32.335 Cannot find device "nvmf_tgt_br2" 00:08:32.335 00:17:19 -- nvmf/common.sh@158 -- # true 00:08:32.335 00:17:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:32.335 00:17:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:32.593 00:17:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:32.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:32.594 00:17:19 -- nvmf/common.sh@161 -- # true 00:08:32.594 00:17:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:32.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:32.594 00:17:19 -- nvmf/common.sh@162 -- # true 00:08:32.594 00:17:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:32.594 00:17:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:32.594 00:17:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:32.594 00:17:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:32.594 00:17:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:32.594 00:17:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:32.594 00:17:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:32.594 00:17:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:32.594 00:17:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:32.594 00:17:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:32.594 00:17:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:32.594 00:17:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:32.594 00:17:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:32.594 00:17:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:32.594 00:17:19 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:32.594 00:17:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:32.594 00:17:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:32.594 00:17:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:32.594 00:17:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:32.594 00:17:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:32.594 00:17:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:32.594 00:17:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:32.594 00:17:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:32.594 00:17:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:32.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:08:32.594 00:08:32.594 --- 10.0.0.2 ping statistics --- 00:08:32.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.594 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:08:32.594 00:17:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:32.594 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:32.594 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:08:32.594 00:08:32.594 --- 10.0.0.3 ping statistics --- 00:08:32.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.594 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:32.594 00:17:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:32.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:08:32.594 00:08:32.594 --- 10.0.0.1 ping statistics --- 00:08:32.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.594 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:08:32.594 00:17:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.594 00:17:19 -- nvmf/common.sh@421 -- # return 0 00:08:32.594 00:17:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:32.594 00:17:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.594 00:17:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:32.594 00:17:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:32.594 00:17:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.594 00:17:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:32.594 00:17:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:32.594 00:17:19 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:32.594 00:17:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:32.594 00:17:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:32.594 00:17:19 -- common/autotest_common.sh@10 -- # set +x 00:08:32.594 00:17:19 -- nvmf/common.sh@469 -- # nvmfpid=73098 00:08:32.594 00:17:19 -- nvmf/common.sh@470 -- # waitforlisten 73098 00:08:32.594 00:17:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.594 00:17:19 -- common/autotest_common.sh@819 -- # '[' -z 73098 ']' 00:08:32.594 00:17:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.594 00:17:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:32.594 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.594 00:17:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.594 00:17:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:32.594 00:17:19 -- common/autotest_common.sh@10 -- # set +x 00:08:32.853 [2024-07-13 00:17:19.851134] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:32.853 [2024-07-13 00:17:19.851195] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.853 [2024-07-13 00:17:19.994817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.112 [2024-07-13 00:17:20.112333] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:33.112 [2024-07-13 00:17:20.112589] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.112 [2024-07-13 00:17:20.112648] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.112 [2024-07-13 00:17:20.112682] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.112 [2024-07-13 00:17:20.112817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.112 [2024-07-13 00:17:20.112915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.112 [2024-07-13 00:17:20.113043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.112 [2024-07-13 00:17:20.113063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.679 00:17:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:33.679 00:17:20 -- common/autotest_common.sh@852 -- # return 0 00:08:33.679 00:17:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:33.679 00:17:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:33.679 00:17:20 -- common/autotest_common.sh@10 -- # set +x 00:08:33.938 00:17:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.938 00:17:20 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.938 00:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.938 00:17:20 -- common/autotest_common.sh@10 -- # set +x 00:08:33.938 [2024-07-13 00:17:20.934416] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.938 00:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.938 00:17:20 -- target/discovery.sh@26 -- # seq 1 4 00:08:33.938 00:17:20 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:33.938 00:17:20 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:33.938 00:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.938 00:17:20 -- common/autotest_common.sh@10 -- # set +x 00:08:33.938 Null1 00:08:33.938 00:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.938 00:17:20 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:33.938 00:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.938 00:17:20 -- common/autotest_common.sh@10 -- # set +x 00:08:33.938 00:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
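Not part of the captured output: the rpc_cmd calls traced here are the setup loop of target/discovery.sh -- four null bdevs, one subsystem per bdev, all listening on the same TCP address, plus a discovery referral; the remaining iterations and the referral continue in the trace below. A condensed sketch, with the NULL_BDEV_SIZE/NULL_BLOCK_SIZE values taken from the script header printed earlier in the trace and rpc_cmd being the autotest wrapper around rpc.py.
# Sketch only -- not part of the captured log.
for i in $(seq 1 4); do
    rpc_cmd bdev_null_create "Null$i" 102400 512                          # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"  # expose the bdev as a namespace
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420  # the discovery service itself
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430            # appears later as Discovery Log Entry 5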
00:08:33.938 00:17:20 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:33.939 00:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.939 00:17:20 -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 00:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.939 00:17:20 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.939 00:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.939 00:17:20 -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 [2024-07-13 00:17:20.985365] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.939 00:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.939 00:17:20 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:33.939 00:17:20 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:33.939 00:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.939 00:17:20 -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 Null2 00:08:33.939 00:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.939 00:17:20 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:33.939 00:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.939 00:17:20 -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.939 00:17:21 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:33.939 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.939 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.939 00:17:21 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:33.939 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.939 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.939 00:17:21 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:33.939 00:17:21 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:33.939 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.939 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 Null3 00:08:33.939 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.939 00:17:21 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:33.939 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.939 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.939 00:17:21 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:33.939 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.939 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.939 00:17:21 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:33.939 00:17:21 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:33.939 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.939 00:17:21 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:33.939 00:17:21 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:33.939 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.939 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 Null4 00:08:33.939 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.939 00:17:21 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:33.939 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.939 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.939 00:17:21 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:33.939 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.939 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.939 00:17:21 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:33.939 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.939 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.939 00:17:21 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:33.939 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.939 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.939 00:17:21 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:33.939 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.939 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.939 00:17:21 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -a 10.0.0.2 -s 4420 00:08:33.939 00:08:33.939 Discovery Log Number of Records 6, Generation counter 6 00:08:33.939 =====Discovery Log Entry 0====== 00:08:33.939 trtype: tcp 00:08:33.939 adrfam: ipv4 00:08:33.939 subtype: current discovery subsystem 00:08:33.939 treq: not required 00:08:33.939 portid: 0 00:08:33.939 trsvcid: 4420 00:08:33.939 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:33.939 traddr: 10.0.0.2 00:08:33.939 eflags: explicit discovery connections, duplicate discovery information 00:08:33.939 sectype: none 00:08:33.939 =====Discovery Log Entry 1====== 00:08:33.939 trtype: tcp 00:08:33.939 adrfam: ipv4 00:08:33.939 subtype: nvme subsystem 00:08:33.939 treq: not required 00:08:33.939 portid: 0 00:08:33.939 trsvcid: 4420 00:08:33.939 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:33.939 traddr: 10.0.0.2 00:08:33.939 eflags: none 00:08:33.939 sectype: none 00:08:33.939 =====Discovery Log Entry 2====== 00:08:33.939 trtype: tcp 00:08:33.939 adrfam: ipv4 00:08:33.939 subtype: nvme subsystem 00:08:33.939 treq: not required 00:08:33.939 portid: 0 00:08:33.939 trsvcid: 4420 
00:08:33.939 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:33.939 traddr: 10.0.0.2 00:08:33.939 eflags: none 00:08:33.939 sectype: none 00:08:33.939 =====Discovery Log Entry 3====== 00:08:33.939 trtype: tcp 00:08:33.939 adrfam: ipv4 00:08:33.939 subtype: nvme subsystem 00:08:33.939 treq: not required 00:08:33.939 portid: 0 00:08:33.939 trsvcid: 4420 00:08:33.939 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:33.939 traddr: 10.0.0.2 00:08:33.939 eflags: none 00:08:33.939 sectype: none 00:08:33.939 =====Discovery Log Entry 4====== 00:08:33.939 trtype: tcp 00:08:33.939 adrfam: ipv4 00:08:33.939 subtype: nvme subsystem 00:08:33.939 treq: not required 00:08:33.939 portid: 0 00:08:33.939 trsvcid: 4420 00:08:33.939 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:33.939 traddr: 10.0.0.2 00:08:33.939 eflags: none 00:08:33.939 sectype: none 00:08:33.939 =====Discovery Log Entry 5====== 00:08:33.939 trtype: tcp 00:08:33.939 adrfam: ipv4 00:08:33.939 subtype: discovery subsystem referral 00:08:33.939 treq: not required 00:08:33.939 portid: 0 00:08:33.939 trsvcid: 4430 00:08:33.939 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:33.939 traddr: 10.0.0.2 00:08:33.939 eflags: none 00:08:33.939 sectype: none 00:08:33.939 Perform nvmf subsystem discovery via RPC 00:08:33.939 00:17:21 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:33.939 00:17:21 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:33.939 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.939 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:34.198 [2024-07-13 00:17:21.169314] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:34.198 [ 00:08:34.198 { 00:08:34.198 "allow_any_host": true, 00:08:34.198 "hosts": [], 00:08:34.198 "listen_addresses": [ 00:08:34.198 { 00:08:34.198 "adrfam": "IPv4", 00:08:34.198 "traddr": "10.0.0.2", 00:08:34.198 "transport": "TCP", 00:08:34.198 "trsvcid": "4420", 00:08:34.198 "trtype": "TCP" 00:08:34.198 } 00:08:34.198 ], 00:08:34.198 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:34.198 "subtype": "Discovery" 00:08:34.198 }, 00:08:34.198 { 00:08:34.198 "allow_any_host": true, 00:08:34.198 "hosts": [], 00:08:34.198 "listen_addresses": [ 00:08:34.198 { 00:08:34.198 "adrfam": "IPv4", 00:08:34.198 "traddr": "10.0.0.2", 00:08:34.198 "transport": "TCP", 00:08:34.198 "trsvcid": "4420", 00:08:34.198 "trtype": "TCP" 00:08:34.198 } 00:08:34.198 ], 00:08:34.198 "max_cntlid": 65519, 00:08:34.198 "max_namespaces": 32, 00:08:34.198 "min_cntlid": 1, 00:08:34.198 "model_number": "SPDK bdev Controller", 00:08:34.198 "namespaces": [ 00:08:34.198 { 00:08:34.198 "bdev_name": "Null1", 00:08:34.198 "name": "Null1", 00:08:34.198 "nguid": "4381663790364A58B14E1748D2BCC22B", 00:08:34.198 "nsid": 1, 00:08:34.198 "uuid": "43816637-9036-4a58-b14e-1748d2bcc22b" 00:08:34.198 } 00:08:34.198 ], 00:08:34.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:34.198 "serial_number": "SPDK00000000000001", 00:08:34.198 "subtype": "NVMe" 00:08:34.198 }, 00:08:34.198 { 00:08:34.198 "allow_any_host": true, 00:08:34.198 "hosts": [], 00:08:34.198 "listen_addresses": [ 00:08:34.198 { 00:08:34.198 "adrfam": "IPv4", 00:08:34.198 "traddr": "10.0.0.2", 00:08:34.198 "transport": "TCP", 00:08:34.198 "trsvcid": "4420", 00:08:34.198 "trtype": "TCP" 00:08:34.198 } 00:08:34.198 ], 00:08:34.198 "max_cntlid": 65519, 00:08:34.198 "max_namespaces": 32, 00:08:34.198 "min_cntlid": 1, 
00:08:34.198 "model_number": "SPDK bdev Controller", 00:08:34.198 "namespaces": [ 00:08:34.198 { 00:08:34.198 "bdev_name": "Null2", 00:08:34.198 "name": "Null2", 00:08:34.198 "nguid": "CA32A8C2D58F422FA28ED77820FC1521", 00:08:34.198 "nsid": 1, 00:08:34.198 "uuid": "ca32a8c2-d58f-422f-a28e-d77820fc1521" 00:08:34.198 } 00:08:34.198 ], 00:08:34.198 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:34.198 "serial_number": "SPDK00000000000002", 00:08:34.198 "subtype": "NVMe" 00:08:34.198 }, 00:08:34.198 { 00:08:34.198 "allow_any_host": true, 00:08:34.198 "hosts": [], 00:08:34.198 "listen_addresses": [ 00:08:34.198 { 00:08:34.198 "adrfam": "IPv4", 00:08:34.198 "traddr": "10.0.0.2", 00:08:34.198 "transport": "TCP", 00:08:34.198 "trsvcid": "4420", 00:08:34.198 "trtype": "TCP" 00:08:34.198 } 00:08:34.198 ], 00:08:34.198 "max_cntlid": 65519, 00:08:34.198 "max_namespaces": 32, 00:08:34.198 "min_cntlid": 1, 00:08:34.198 "model_number": "SPDK bdev Controller", 00:08:34.198 "namespaces": [ 00:08:34.198 { 00:08:34.198 "bdev_name": "Null3", 00:08:34.198 "name": "Null3", 00:08:34.198 "nguid": "0F458DB3ECBC48FDBBB050883AB54D06", 00:08:34.198 "nsid": 1, 00:08:34.198 "uuid": "0f458db3-ecbc-48fd-bbb0-50883ab54d06" 00:08:34.198 } 00:08:34.198 ], 00:08:34.198 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:34.198 "serial_number": "SPDK00000000000003", 00:08:34.198 "subtype": "NVMe" 00:08:34.198 }, 00:08:34.198 { 00:08:34.198 "allow_any_host": true, 00:08:34.198 "hosts": [], 00:08:34.198 "listen_addresses": [ 00:08:34.198 { 00:08:34.198 "adrfam": "IPv4", 00:08:34.198 "traddr": "10.0.0.2", 00:08:34.198 "transport": "TCP", 00:08:34.198 "trsvcid": "4420", 00:08:34.198 "trtype": "TCP" 00:08:34.198 } 00:08:34.198 ], 00:08:34.198 "max_cntlid": 65519, 00:08:34.198 "max_namespaces": 32, 00:08:34.198 "min_cntlid": 1, 00:08:34.198 "model_number": "SPDK bdev Controller", 00:08:34.198 "namespaces": [ 00:08:34.198 { 00:08:34.198 "bdev_name": "Null4", 00:08:34.198 "name": "Null4", 00:08:34.198 "nguid": "8D3F4C84A1A94130AAF5380A0E588D7C", 00:08:34.198 "nsid": 1, 00:08:34.198 "uuid": "8d3f4c84-a1a9-4130-aaf5-380a0e588d7c" 00:08:34.198 } 00:08:34.198 ], 00:08:34.198 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:34.198 "serial_number": "SPDK00000000000004", 00:08:34.198 "subtype": "NVMe" 00:08:34.198 } 00:08:34.198 ] 00:08:34.198 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.198 00:17:21 -- target/discovery.sh@42 -- # seq 1 4 00:08:34.198 00:17:21 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:34.198 00:17:21 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.198 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.198 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:34.198 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.198 00:17:21 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:34.198 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.198 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:34.198 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.198 00:17:21 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:34.198 00:17:21 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:34.198 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.198 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:34.198 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.198 00:17:21 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:34.198 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.198 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:34.198 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.198 00:17:21 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:34.198 00:17:21 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:34.198 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.198 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:34.198 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.198 00:17:21 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:34.198 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.198 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:34.198 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.198 00:17:21 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:34.198 00:17:21 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:34.198 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.198 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:34.198 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.198 00:17:21 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:34.198 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.198 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:34.198 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.198 00:17:21 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:34.198 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.198 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:34.198 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.198 00:17:21 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:34.198 00:17:21 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:34.198 00:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.199 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:34.199 00:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.199 00:17:21 -- target/discovery.sh@49 -- # check_bdevs= 00:08:34.199 00:17:21 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:34.199 00:17:21 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:34.199 00:17:21 -- target/discovery.sh@57 -- # nvmftestfini 00:08:34.199 00:17:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:34.199 00:17:21 -- nvmf/common.sh@116 -- # sync 00:08:34.199 00:17:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:34.199 00:17:21 -- nvmf/common.sh@119 -- # set +e 00:08:34.199 00:17:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:34.199 00:17:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:34.199 rmmod nvme_tcp 00:08:34.199 rmmod nvme_fabrics 00:08:34.199 rmmod nvme_keyring 00:08:34.199 00:17:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:34.199 00:17:21 -- nvmf/common.sh@123 -- # set -e 00:08:34.199 00:17:21 -- nvmf/common.sh@124 -- # return 0 00:08:34.199 00:17:21 -- nvmf/common.sh@477 -- # '[' -n 73098 ']' 00:08:34.199 00:17:21 -- nvmf/common.sh@478 -- # killprocess 73098 00:08:34.199 00:17:21 -- common/autotest_common.sh@926 -- # '[' -z 73098 ']' 00:08:34.199 00:17:21 -- 
common/autotest_common.sh@930 -- # kill -0 73098 00:08:34.199 00:17:21 -- common/autotest_common.sh@931 -- # uname 00:08:34.199 00:17:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:34.199 00:17:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73098 00:08:34.199 killing process with pid 73098 00:08:34.199 00:17:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:34.199 00:17:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:34.199 00:17:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73098' 00:08:34.199 00:17:21 -- common/autotest_common.sh@945 -- # kill 73098 00:08:34.199 [2024-07-13 00:17:21.406306] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:34.199 00:17:21 -- common/autotest_common.sh@950 -- # wait 73098 00:08:34.478 00:17:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:34.478 00:17:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:34.478 00:17:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:34.478 00:17:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.478 00:17:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:34.478 00:17:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.478 00:17:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.478 00:17:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.478 00:17:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:34.478 00:08:34.478 real 0m2.280s 00:08:34.478 user 0m6.368s 00:08:34.478 sys 0m0.587s 00:08:34.478 00:17:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.478 ************************************ 00:08:34.478 END TEST nvmf_discovery 00:08:34.478 ************************************ 00:08:34.478 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:34.478 00:17:21 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:34.478 00:17:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:34.478 00:17:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.478 00:17:21 -- common/autotest_common.sh@10 -- # set +x 00:08:34.478 ************************************ 00:08:34.478 START TEST nvmf_referrals 00:08:34.478 ************************************ 00:08:34.478 00:17:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:34.742 * Looking for test storage... 
00:08:34.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:34.742 00:17:21 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:34.742 00:17:21 -- nvmf/common.sh@7 -- # uname -s 00:08:34.742 00:17:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.742 00:17:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.742 00:17:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.742 00:17:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.742 00:17:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.742 00:17:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.742 00:17:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.742 00:17:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.742 00:17:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.742 00:17:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.742 00:17:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:08:34.742 00:17:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:08:34.742 00:17:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.742 00:17:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.742 00:17:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:34.742 00:17:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:34.742 00:17:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.742 00:17:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.742 00:17:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.742 00:17:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.742 00:17:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.742 00:17:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.742 00:17:21 -- 
paths/export.sh@5 -- # export PATH 00:08:34.742 00:17:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.742 00:17:21 -- nvmf/common.sh@46 -- # : 0 00:08:34.742 00:17:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:34.742 00:17:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:34.742 00:17:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:34.742 00:17:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.742 00:17:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.742 00:17:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:34.742 00:17:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:34.742 00:17:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:34.742 00:17:21 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:34.742 00:17:21 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:34.742 00:17:21 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:34.742 00:17:21 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:34.742 00:17:21 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:34.742 00:17:21 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:34.742 00:17:21 -- target/referrals.sh@37 -- # nvmftestinit 00:08:34.742 00:17:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:34.742 00:17:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.742 00:17:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:34.742 00:17:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:34.742 00:17:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:34.742 00:17:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.742 00:17:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.742 00:17:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.742 00:17:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:34.742 00:17:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:34.742 00:17:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:34.742 00:17:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:34.742 00:17:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:34.742 00:17:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:34.742 00:17:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.742 00:17:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.742 00:17:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:34.742 00:17:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:34.742 00:17:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:34.742 00:17:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:34.742 00:17:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:34.742 00:17:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.742 00:17:21 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:34.742 00:17:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:34.742 00:17:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:34.742 00:17:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:34.742 00:17:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:34.742 00:17:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:34.742 Cannot find device "nvmf_tgt_br" 00:08:34.742 00:17:21 -- nvmf/common.sh@154 -- # true 00:08:34.742 00:17:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:34.742 Cannot find device "nvmf_tgt_br2" 00:08:34.742 00:17:21 -- nvmf/common.sh@155 -- # true 00:08:34.742 00:17:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:34.742 00:17:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:34.742 Cannot find device "nvmf_tgt_br" 00:08:34.742 00:17:21 -- nvmf/common.sh@157 -- # true 00:08:34.742 00:17:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:34.742 Cannot find device "nvmf_tgt_br2" 00:08:34.742 00:17:21 -- nvmf/common.sh@158 -- # true 00:08:34.742 00:17:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:34.742 00:17:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:34.742 00:17:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:34.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.742 00:17:21 -- nvmf/common.sh@161 -- # true 00:08:34.742 00:17:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:34.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.742 00:17:21 -- nvmf/common.sh@162 -- # true 00:08:34.742 00:17:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:34.742 00:17:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:34.742 00:17:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:34.742 00:17:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:34.742 00:17:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:34.742 00:17:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:34.742 00:17:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:34.742 00:17:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:34.742 00:17:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:34.742 00:17:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:34.742 00:17:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:34.742 00:17:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:34.742 00:17:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:34.742 00:17:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:34.742 00:17:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:34.742 00:17:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:34.742 00:17:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:34.742 00:17:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:35.000 00:17:21 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:35.000 00:17:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:35.000 00:17:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:35.000 00:17:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:35.000 00:17:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:35.000 00:17:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:35.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:08:35.000 00:08:35.000 --- 10.0.0.2 ping statistics --- 00:08:35.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.000 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:35.000 00:17:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:35.000 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:35.000 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:08:35.000 00:08:35.000 --- 10.0.0.3 ping statistics --- 00:08:35.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.000 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:35.000 00:17:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:35.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:35.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:08:35.000 00:08:35.000 --- 10.0.0.1 ping statistics --- 00:08:35.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.000 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:35.000 00:17:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.000 00:17:22 -- nvmf/common.sh@421 -- # return 0 00:08:35.000 00:17:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:35.000 00:17:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.000 00:17:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:35.000 00:17:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:35.000 00:17:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.000 00:17:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:35.000 00:17:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:35.000 00:17:22 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:35.000 00:17:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:35.000 00:17:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:35.000 00:17:22 -- common/autotest_common.sh@10 -- # set +x 00:08:35.000 00:17:22 -- nvmf/common.sh@469 -- # nvmfpid=73321 00:08:35.000 00:17:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:35.000 00:17:22 -- nvmf/common.sh@470 -- # waitforlisten 73321 00:08:35.000 00:17:22 -- common/autotest_common.sh@819 -- # '[' -z 73321 ']' 00:08:35.000 00:17:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.000 00:17:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:35.000 00:17:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
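For readability, the veth/namespace bring-up that nvmf_veth_init traces above condenses to the commands below. Names, addresses and the nvmf_tgt path are taken directly from the trace (the second target interface, nvmf_tgt_if2 on 10.0.0.3, is created the same way and omitted here); this is only a sketch assuming root privileges with iproute2 available, not the test harness itself:

  # target namespace plus two veth pairs joined by a bridge
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # initiator at 10.0.0.1, target at 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # open the NVMe/TCP port, check reachability, load the initiator driver
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  modprobe nvme-tcp

  # start the target inside the namespace, as nvmfappstart does below
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &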
00:08:35.000 00:17:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:35.000 00:17:22 -- common/autotest_common.sh@10 -- # set +x 00:08:35.000 [2024-07-13 00:17:22.116610] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:35.000 [2024-07-13 00:17:22.116719] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.258 [2024-07-13 00:17:22.261615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.258 [2024-07-13 00:17:22.339395] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:35.258 [2024-07-13 00:17:22.339541] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.258 [2024-07-13 00:17:22.339554] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.258 [2024-07-13 00:17:22.339562] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.258 [2024-07-13 00:17:22.339753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.258 [2024-07-13 00:17:22.340130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.258 [2024-07-13 00:17:22.340265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.258 [2024-07-13 00:17:22.340273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.824 00:17:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:35.824 00:17:23 -- common/autotest_common.sh@852 -- # return 0 00:08:35.824 00:17:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:35.824 00:17:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:35.824 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.083 00:17:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.083 00:17:23 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.083 00:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.083 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.083 [2024-07-13 00:17:23.067566] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.083 00:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.083 00:17:23 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:36.083 00:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.083 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.083 [2024-07-13 00:17:23.091153] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:36.083 00:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.083 00:17:23 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:36.083 00:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.083 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.083 00:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.083 00:17:23 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:36.083 00:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.083 00:17:23 -- 
common/autotest_common.sh@10 -- # set +x 00:08:36.083 00:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.083 00:17:23 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:36.083 00:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.083 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.083 00:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.083 00:17:23 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.083 00:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.083 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.083 00:17:23 -- target/referrals.sh@48 -- # jq length 00:08:36.083 00:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.083 00:17:23 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:36.083 00:17:23 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:36.083 00:17:23 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:36.083 00:17:23 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.083 00:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.083 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.083 00:17:23 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:36.083 00:17:23 -- target/referrals.sh@21 -- # sort 00:08:36.083 00:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.083 00:17:23 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:36.083 00:17:23 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:36.083 00:17:23 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:36.083 00:17:23 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.083 00:17:23 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.083 00:17:23 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.083 00:17:23 -- target/referrals.sh@26 -- # sort 00:08:36.083 00:17:23 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.342 00:17:23 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:36.342 00:17:23 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:36.342 00:17:23 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:36.342 00:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.342 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 00:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.342 00:17:23 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:36.342 00:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.342 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 00:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.342 00:17:23 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:36.342 00:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.342 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 00:17:23 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.342 00:17:23 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.342 00:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.342 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 00:17:23 -- target/referrals.sh@56 -- # jq length 00:08:36.342 00:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.342 00:17:23 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:36.342 00:17:23 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:36.342 00:17:23 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.342 00:17:23 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.342 00:17:23 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.342 00:17:23 -- target/referrals.sh@26 -- # sort 00:08:36.342 00:17:23 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.342 00:17:23 -- target/referrals.sh@26 -- # echo 00:08:36.342 00:17:23 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:36.342 00:17:23 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:36.342 00:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.342 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 00:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.342 00:17:23 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:36.342 00:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.342 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 00:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.342 00:17:23 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:36.342 00:17:23 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:36.342 00:17:23 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.342 00:17:23 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:36.342 00:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.342 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 00:17:23 -- target/referrals.sh@21 -- # sort 00:08:36.342 00:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.601 00:17:23 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:36.601 00:17:23 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:36.601 00:17:23 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:36.601 00:17:23 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.601 00:17:23 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.601 00:17:23 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.601 00:17:23 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.601 00:17:23 -- target/referrals.sh@26 -- # sort 00:08:36.601 00:17:23 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:36.601 00:17:23 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == 
\1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:36.601 00:17:23 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:36.601 00:17:23 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:36.601 00:17:23 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:36.601 00:17:23 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.601 00:17:23 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:36.602 00:17:23 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:36.602 00:17:23 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:36.602 00:17:23 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:36.602 00:17:23 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.602 00:17:23 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:36.602 00:17:23 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:36.602 00:17:23 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:36.602 00:17:23 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:36.602 00:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.602 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.602 00:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.602 00:17:23 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:36.602 00:17:23 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:36.602 00:17:23 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.602 00:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.602 00:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:36.602 00:17:23 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:36.602 00:17:23 -- target/referrals.sh@21 -- # sort 00:08:36.602 00:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.860 00:17:23 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:36.860 00:17:23 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:36.860 00:17:23 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:36.860 00:17:23 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.860 00:17:23 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.860 00:17:23 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.860 00:17:23 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.860 00:17:23 -- target/referrals.sh@26 -- # sort 00:08:36.860 00:17:23 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:36.860 00:17:23 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:36.860 00:17:23 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme 
subsystem' 00:08:36.860 00:17:23 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:36.860 00:17:23 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:36.860 00:17:23 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.860 00:17:23 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:36.860 00:17:24 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:36.860 00:17:24 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:36.860 00:17:24 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:36.860 00:17:24 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.860 00:17:24 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:36.860 00:17:24 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:36.860 00:17:24 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:36.860 00:17:24 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:36.860 00:17:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.860 00:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:36.860 00:17:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.860 00:17:24 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.860 00:17:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.860 00:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:36.860 00:17:24 -- target/referrals.sh@82 -- # jq length 00:08:37.118 00:17:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.118 00:17:24 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:37.118 00:17:24 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:37.118 00:17:24 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:37.118 00:17:24 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:37.118 00:17:24 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:37.118 00:17:24 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.118 00:17:24 -- target/referrals.sh@26 -- # sort 00:08:37.118 00:17:24 -- target/referrals.sh@26 -- # echo 00:08:37.118 00:17:24 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:37.118 00:17:24 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:37.118 00:17:24 -- target/referrals.sh@86 -- # nvmftestfini 00:08:37.118 00:17:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:37.118 00:17:24 -- nvmf/common.sh@116 -- # sync 00:08:37.118 00:17:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:37.118 00:17:24 -- nvmf/common.sh@119 -- # set +e 00:08:37.118 00:17:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:37.118 00:17:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:37.118 rmmod nvme_tcp 00:08:37.118 rmmod nvme_fabrics 00:08:37.118 rmmod nvme_keyring 
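The referral checks traced above reduce to the following add/verify/remove cycle. This is a minimal sketch using the same RPCs and jq filter as the trace, with the rpc.py path assumed from the repository layout shown earlier and NVME_HOSTNQN/NVME_HOSTID as exported by nvmf/common.sh:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # listen for discovery on 8009 and publish three referrals on port 4430
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done

  # target-side count (expects 3) and the initiator's view of the same referrals
  $rpc nvmf_discovery_get_referrals | jq length
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

  # remove them again; the referral count drops back to 0
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  $rpc nvmf_discovery_get_referrals | jq length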
00:08:37.118 00:17:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:37.118 00:17:24 -- nvmf/common.sh@123 -- # set -e 00:08:37.118 00:17:24 -- nvmf/common.sh@124 -- # return 0 00:08:37.118 00:17:24 -- nvmf/common.sh@477 -- # '[' -n 73321 ']' 00:08:37.118 00:17:24 -- nvmf/common.sh@478 -- # killprocess 73321 00:08:37.118 00:17:24 -- common/autotest_common.sh@926 -- # '[' -z 73321 ']' 00:08:37.118 00:17:24 -- common/autotest_common.sh@930 -- # kill -0 73321 00:08:37.118 00:17:24 -- common/autotest_common.sh@931 -- # uname 00:08:37.118 00:17:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:37.118 00:17:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73321 00:08:37.118 00:17:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:37.118 killing process with pid 73321 00:08:37.118 00:17:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:37.118 00:17:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73321' 00:08:37.118 00:17:24 -- common/autotest_common.sh@945 -- # kill 73321 00:08:37.118 00:17:24 -- common/autotest_common.sh@950 -- # wait 73321 00:08:37.376 00:17:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:37.376 00:17:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:37.376 00:17:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:37.376 00:17:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:37.376 00:17:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:37.376 00:17:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.376 00:17:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.376 00:17:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.376 00:17:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:37.376 00:08:37.376 real 0m2.922s 00:08:37.376 user 0m9.874s 00:08:37.376 sys 0m0.768s 00:08:37.376 00:17:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.376 00:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:37.376 ************************************ 00:08:37.376 END TEST nvmf_referrals 00:08:37.376 ************************************ 00:08:37.643 00:17:24 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:37.643 00:17:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:37.643 00:17:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.643 00:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:37.643 ************************************ 00:08:37.643 START TEST nvmf_connect_disconnect 00:08:37.643 ************************************ 00:08:37.643 00:17:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:37.643 * Looking for test storage... 
00:08:37.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:37.643 00:17:24 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:37.643 00:17:24 -- nvmf/common.sh@7 -- # uname -s 00:08:37.643 00:17:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.643 00:17:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.643 00:17:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.643 00:17:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.643 00:17:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.643 00:17:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.643 00:17:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.643 00:17:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.643 00:17:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.643 00:17:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.643 00:17:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:08:37.643 00:17:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:08:37.643 00:17:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.643 00:17:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.643 00:17:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:37.643 00:17:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:37.643 00:17:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.643 00:17:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.643 00:17:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.643 00:17:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.643 00:17:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.643 00:17:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.643 00:17:24 -- 
paths/export.sh@5 -- # export PATH 00:08:37.643 00:17:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.643 00:17:24 -- nvmf/common.sh@46 -- # : 0 00:08:37.643 00:17:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:37.643 00:17:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:37.643 00:17:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:37.643 00:17:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.643 00:17:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.643 00:17:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:37.643 00:17:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:37.643 00:17:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:37.643 00:17:24 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.643 00:17:24 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.643 00:17:24 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:37.643 00:17:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:37.643 00:17:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.643 00:17:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:37.643 00:17:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:37.643 00:17:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:37.643 00:17:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.643 00:17:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.643 00:17:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.643 00:17:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:37.643 00:17:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:37.643 00:17:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:37.643 00:17:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:37.643 00:17:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:37.643 00:17:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:37.643 00:17:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.643 00:17:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.643 00:17:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:37.643 00:17:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:37.643 00:17:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:37.643 00:17:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:37.643 00:17:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:37.643 00:17:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.643 00:17:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:37.643 00:17:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:37.643 00:17:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:37.643 00:17:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:37.643 00:17:24 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:08:37.643 00:17:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:37.643 Cannot find device "nvmf_tgt_br" 00:08:37.643 00:17:24 -- nvmf/common.sh@154 -- # true 00:08:37.643 00:17:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:37.643 Cannot find device "nvmf_tgt_br2" 00:08:37.643 00:17:24 -- nvmf/common.sh@155 -- # true 00:08:37.643 00:17:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:37.643 00:17:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:37.643 Cannot find device "nvmf_tgt_br" 00:08:37.643 00:17:24 -- nvmf/common.sh@157 -- # true 00:08:37.643 00:17:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:37.643 Cannot find device "nvmf_tgt_br2" 00:08:37.643 00:17:24 -- nvmf/common.sh@158 -- # true 00:08:37.643 00:17:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:37.643 00:17:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:37.917 00:17:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:37.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:37.917 00:17:24 -- nvmf/common.sh@161 -- # true 00:08:37.917 00:17:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:37.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:37.917 00:17:24 -- nvmf/common.sh@162 -- # true 00:08:37.917 00:17:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:37.917 00:17:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:37.917 00:17:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:37.917 00:17:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:37.917 00:17:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:37.917 00:17:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:37.917 00:17:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:37.917 00:17:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:37.917 00:17:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:37.917 00:17:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:37.917 00:17:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:37.917 00:17:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:37.917 00:17:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:37.917 00:17:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:37.917 00:17:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:37.917 00:17:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:37.917 00:17:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:37.917 00:17:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:37.917 00:17:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:37.917 00:17:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:37.917 00:17:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:37.917 00:17:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 
-j ACCEPT 00:08:37.917 00:17:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:37.917 00:17:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:37.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:08:37.917 00:08:37.917 --- 10.0.0.2 ping statistics --- 00:08:37.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.917 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:08:37.917 00:17:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:37.917 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:37.917 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:08:37.917 00:08:37.917 --- 10.0.0.3 ping statistics --- 00:08:37.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.917 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:37.917 00:17:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:37.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:37.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:08:37.917 00:08:37.917 --- 10.0.0.1 ping statistics --- 00:08:37.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.917 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:37.917 00:17:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.917 00:17:25 -- nvmf/common.sh@421 -- # return 0 00:08:37.917 00:17:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:37.917 00:17:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.917 00:17:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:37.917 00:17:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:37.917 00:17:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.917 00:17:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:37.917 00:17:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:37.917 00:17:25 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:37.917 00:17:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:37.917 00:17:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:37.917 00:17:25 -- common/autotest_common.sh@10 -- # set +x 00:08:37.917 00:17:25 -- nvmf/common.sh@469 -- # nvmfpid=73622 00:08:37.917 00:17:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:37.917 00:17:25 -- nvmf/common.sh@470 -- # waitforlisten 73622 00:08:37.917 00:17:25 -- common/autotest_common.sh@819 -- # '[' -z 73622 ']' 00:08:37.917 00:17:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.917 00:17:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:37.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.917 00:17:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.917 00:17:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:37.917 00:17:25 -- common/autotest_common.sh@10 -- # set +x 00:08:38.174 [2024-07-13 00:17:25.156458] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
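Once that target is listening, connect_disconnect.sh builds the configuration traced below (a 64 MiB malloc bdev exported through nqn.2016-06.io.spdk:cnode1) and then runs 100 connect/disconnect iterations from the initiator. A rough sketch of that flow, with the rpc.py path assumed as before and the loop body reduced to the two nvme-cli calls whose output dominates the rest of this log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  malloc=$($rpc bdev_malloc_create 64 512)        # trace shows this returns Malloc0
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns "$nqn" "$malloc"
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

  # each disconnect prints one of the "NQN:... disconnected 1 controller(s)"
  # lines that fill the remainder of this log
  for i in $(seq 1 100); do
      nvme connect -i 8 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
          -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
      nvme disconnect -n "$nqn"
  done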
00:08:38.174 [2024-07-13 00:17:25.156544] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.174 [2024-07-13 00:17:25.293171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.174 [2024-07-13 00:17:25.375345] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:38.174 [2024-07-13 00:17:25.375507] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.174 [2024-07-13 00:17:25.375521] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.174 [2024-07-13 00:17:25.375529] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.174 [2024-07-13 00:17:25.375609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.174 [2024-07-13 00:17:25.375750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.174 [2024-07-13 00:17:25.376057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.174 [2024-07-13 00:17:25.376061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.107 00:17:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:39.107 00:17:26 -- common/autotest_common.sh@852 -- # return 0 00:08:39.107 00:17:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:39.107 00:17:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:39.107 00:17:26 -- common/autotest_common.sh@10 -- # set +x 00:08:39.107 00:17:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.107 00:17:26 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:39.107 00:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.107 00:17:26 -- common/autotest_common.sh@10 -- # set +x 00:08:39.107 [2024-07-13 00:17:26.207963] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.107 00:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.107 00:17:26 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:39.107 00:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.107 00:17:26 -- common/autotest_common.sh@10 -- # set +x 00:08:39.107 00:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.107 00:17:26 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:39.107 00:17:26 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:39.107 00:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.107 00:17:26 -- common/autotest_common.sh@10 -- # set +x 00:08:39.107 00:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.107 00:17:26 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:39.107 00:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.107 00:17:26 -- common/autotest_common.sh@10 -- # set +x 00:08:39.107 00:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.107 00:17:26 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:39.107 00:17:26 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.107 00:17:26 -- common/autotest_common.sh@10 -- # set +x 00:08:39.107 [2024-07-13 00:17:26.277542] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.107 00:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.107 00:17:26 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:39.107 00:17:26 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:39.107 00:17:26 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:39.107 00:17:26 -- target/connect_disconnect.sh@34 -- # set +x 00:08:41.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:10:11.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.719 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.070 00:21:10 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:24.070 00:21:10 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:24.070 00:21:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:24.070 00:21:10 -- nvmf/common.sh@116 -- # sync 00:12:24.070 00:21:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:24.070 00:21:10 -- nvmf/common.sh@119 -- # set +e 00:12:24.070 00:21:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:24.070 00:21:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:24.070 rmmod nvme_tcp 00:12:24.070 rmmod nvme_fabrics 00:12:24.070 rmmod nvme_keyring 00:12:24.070 00:21:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:24.070 00:21:10 -- nvmf/common.sh@123 -- # set -e 00:12:24.070 00:21:10 -- nvmf/common.sh@124 -- # return 0 00:12:24.070 00:21:10 -- nvmf/common.sh@477 -- # '[' -n 73622 ']' 00:12:24.070 00:21:10 -- nvmf/common.sh@478 -- # killprocess 73622 00:12:24.070 00:21:10 -- common/autotest_common.sh@926 -- # '[' -z 73622 ']' 00:12:24.070 00:21:10 -- common/autotest_common.sh@930 -- # kill -0 73622 00:12:24.070 00:21:10 -- common/autotest_common.sh@931 -- # uname 00:12:24.070 00:21:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:24.070 00:21:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73622 00:12:24.070 killing process with pid 73622 00:12:24.070 00:21:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:24.070 00:21:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:24.070 00:21:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73622' 00:12:24.070 00:21:10 -- common/autotest_common.sh@945 -- # kill 73622 00:12:24.070 00:21:10 -- common/autotest_common.sh@950 -- # wait 73622 00:12:24.070 00:21:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:24.070 00:21:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:24.071 00:21:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:24.071 00:21:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:24.071 00:21:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:24.071 00:21:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.071 00:21:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.071 00:21:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.071 00:21:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:24.071 00:12:24.071 real 3m46.554s 00:12:24.071 user 14m47.857s 00:12:24.071 sys 0m19.692s 00:12:24.071 ************************************ 00:12:24.071 END TEST nvmf_connect_disconnect 
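For reference, the target-side RPCs traced at connect_disconnect.sh@18 through @24 above and the loop that produced the hundred "disconnected 1 controller(s)" lines condense to roughly the following; the rpc_cmd calls are copied from the trace, while the loop body is a simplified stand-in (the real script also passes the host NQN/ID options and waits for the namespace to appear before disconnecting):

# Target side: transport, malloc bdev, subsystem, namespace, listener (as issued via rpc_cmd above).
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc_cmd bdev_malloc_create 64 512                                    # returns Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: 100 connect/disconnect cycles (num_iterations=100, NVME_CONNECT='nvme connect -i 8').
for _ in $(seq 1 100); do
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    sleep 1                                                          # stand-in for the script's readiness check
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1                    # emits the "disconnected 1 controller(s)" line
done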
00:12:24.071 ************************************ 00:12:24.071 00:21:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:24.071 00:21:11 -- common/autotest_common.sh@10 -- # set +x 00:12:24.071 00:21:11 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:24.071 00:21:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:24.071 00:21:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:24.071 00:21:11 -- common/autotest_common.sh@10 -- # set +x 00:12:24.071 ************************************ 00:12:24.071 START TEST nvmf_multitarget 00:12:24.071 ************************************ 00:12:24.071 00:21:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:24.329 * Looking for test storage... 00:12:24.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:24.329 00:21:11 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:24.329 00:21:11 -- nvmf/common.sh@7 -- # uname -s 00:12:24.329 00:21:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.329 00:21:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.329 00:21:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.329 00:21:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.329 00:21:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.329 00:21:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.329 00:21:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.329 00:21:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.329 00:21:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.329 00:21:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.329 00:21:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:12:24.329 00:21:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:12:24.329 00:21:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.329 00:21:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.329 00:21:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:24.329 00:21:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:24.329 00:21:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.329 00:21:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.329 00:21:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.329 00:21:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.329 00:21:11 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.329 00:21:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.329 00:21:11 -- paths/export.sh@5 -- # export PATH 00:12:24.329 00:21:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.329 00:21:11 -- nvmf/common.sh@46 -- # : 0 00:12:24.329 00:21:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:24.330 00:21:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:24.330 00:21:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:24.330 00:21:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.330 00:21:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.330 00:21:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:24.330 00:21:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:24.330 00:21:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:24.330 00:21:11 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:24.330 00:21:11 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:24.330 00:21:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:24.330 00:21:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.330 00:21:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:24.330 00:21:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:24.330 00:21:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:24.330 00:21:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.330 00:21:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.330 00:21:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.330 00:21:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:24.330 00:21:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:24.330 00:21:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:24.330 00:21:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:24.330 00:21:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:24.330 00:21:11 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:12:24.330 00:21:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.330 00:21:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.330 00:21:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:24.330 00:21:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:24.330 00:21:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:24.330 00:21:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:24.330 00:21:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:24.330 00:21:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.330 00:21:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:24.330 00:21:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:24.330 00:21:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:24.330 00:21:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:24.330 00:21:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:24.330 00:21:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:24.330 Cannot find device "nvmf_tgt_br" 00:12:24.330 00:21:11 -- nvmf/common.sh@154 -- # true 00:12:24.330 00:21:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:24.330 Cannot find device "nvmf_tgt_br2" 00:12:24.330 00:21:11 -- nvmf/common.sh@155 -- # true 00:12:24.330 00:21:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:24.330 00:21:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:24.330 Cannot find device "nvmf_tgt_br" 00:12:24.330 00:21:11 -- nvmf/common.sh@157 -- # true 00:12:24.330 00:21:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:24.330 Cannot find device "nvmf_tgt_br2" 00:12:24.330 00:21:11 -- nvmf/common.sh@158 -- # true 00:12:24.330 00:21:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:24.330 00:21:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:24.330 00:21:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:24.330 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:24.330 00:21:11 -- nvmf/common.sh@161 -- # true 00:12:24.330 00:21:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:24.330 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:24.330 00:21:11 -- nvmf/common.sh@162 -- # true 00:12:24.330 00:21:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:24.330 00:21:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:24.330 00:21:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:24.330 00:21:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:24.330 00:21:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:24.330 00:21:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:24.330 00:21:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:24.330 00:21:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:24.330 00:21:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:24.588 00:21:11 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:12:24.588 00:21:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:24.588 00:21:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:24.588 00:21:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:24.588 00:21:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:24.588 00:21:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:24.588 00:21:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:24.588 00:21:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:24.588 00:21:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:24.588 00:21:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:24.588 00:21:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:24.588 00:21:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:24.588 00:21:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:24.588 00:21:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:24.588 00:21:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:24.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:24.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:12:24.588 00:12:24.588 --- 10.0.0.2 ping statistics --- 00:12:24.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.588 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:12:24.588 00:21:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:24.588 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:24.588 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:12:24.588 00:12:24.588 --- 10.0.0.3 ping statistics --- 00:12:24.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.588 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:12:24.588 00:21:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:24.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:24.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:24.588 00:12:24.588 --- 10.0.0.1 ping statistics --- 00:12:24.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.588 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:24.588 00:21:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.588 00:21:11 -- nvmf/common.sh@421 -- # return 0 00:12:24.588 00:21:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:24.588 00:21:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.588 00:21:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:24.588 00:21:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:24.588 00:21:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.588 00:21:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:24.588 00:21:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:24.588 00:21:11 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:24.588 00:21:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:24.588 00:21:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:24.588 00:21:11 -- common/autotest_common.sh@10 -- # set +x 00:12:24.588 00:21:11 -- nvmf/common.sh@469 -- # nvmfpid=77406 00:12:24.588 00:21:11 -- nvmf/common.sh@470 -- # waitforlisten 77406 00:12:24.588 00:21:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:24.588 00:21:11 -- common/autotest_common.sh@819 -- # '[' -z 77406 ']' 00:12:24.588 00:21:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.588 00:21:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:24.588 00:21:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.588 00:21:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:24.588 00:21:11 -- common/autotest_common.sh@10 -- # set +x 00:12:24.588 [2024-07-13 00:21:11.761555] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:24.588 [2024-07-13 00:21:11.761682] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.846 [2024-07-13 00:21:11.903495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.846 [2024-07-13 00:21:12.005654] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:24.846 [2024-07-13 00:21:12.006135] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.846 [2024-07-13 00:21:12.006190] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.846 [2024-07-13 00:21:12.006314] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
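The nvmfappstart step traced just above (common.sh@468 through @470) launches the target inside the namespace and then blocks until its RPC socket appears; a bare-bones equivalent, with a simple polling loop standing in for the harness's waitforlisten helper, would be:

# Sketch: start nvmf_tgt in the test namespace and wait for /var/tmp/spdk.sock (cf. common.sh@468-470 above).
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # waitforlisten also checks that $nvmfpid is still alive

With -m 0xF the application claims four cores, which is why four "Reactor started on core N" notices follow in the log.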
00:12:24.846 [2024-07-13 00:21:12.006540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.846 [2024-07-13 00:21:12.006818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.846 [2024-07-13 00:21:12.006974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.846 [2024-07-13 00:21:12.006989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.784 00:21:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:25.784 00:21:12 -- common/autotest_common.sh@852 -- # return 0 00:12:25.784 00:21:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:25.784 00:21:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:25.784 00:21:12 -- common/autotest_common.sh@10 -- # set +x 00:12:25.784 00:21:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.784 00:21:12 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:25.784 00:21:12 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:25.785 00:21:12 -- target/multitarget.sh@21 -- # jq length 00:12:25.785 00:21:12 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:25.785 00:21:12 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:26.044 "nvmf_tgt_1" 00:12:26.044 00:21:13 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:26.044 "nvmf_tgt_2" 00:12:26.044 00:21:13 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:26.044 00:21:13 -- target/multitarget.sh@28 -- # jq length 00:12:26.300 00:21:13 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:26.300 00:21:13 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:26.300 true 00:12:26.300 00:21:13 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:26.557 true 00:12:26.557 00:21:13 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:26.557 00:21:13 -- target/multitarget.sh@35 -- # jq length 00:12:26.557 00:21:13 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:26.557 00:21:13 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:26.557 00:21:13 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:26.557 00:21:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:26.557 00:21:13 -- nvmf/common.sh@116 -- # sync 00:12:26.557 00:21:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:26.557 00:21:13 -- nvmf/common.sh@119 -- # set +e 00:12:26.557 00:21:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:26.557 00:21:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:26.557 rmmod nvme_tcp 00:12:26.557 rmmod nvme_fabrics 00:12:26.557 rmmod nvme_keyring 00:12:26.815 00:21:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:26.815 00:21:13 -- nvmf/common.sh@123 -- # set -e 00:12:26.815 00:21:13 -- nvmf/common.sh@124 -- # return 0 00:12:26.815 00:21:13 -- nvmf/common.sh@477 -- # '[' -n 77406 ']' 00:12:26.815 00:21:13 -- nvmf/common.sh@478 -- # killprocess 77406 00:12:26.815 00:21:13 
-- common/autotest_common.sh@926 -- # '[' -z 77406 ']' 00:12:26.815 00:21:13 -- common/autotest_common.sh@930 -- # kill -0 77406 00:12:26.815 00:21:13 -- common/autotest_common.sh@931 -- # uname 00:12:26.815 00:21:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:26.815 00:21:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77406 00:12:26.815 00:21:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:26.815 00:21:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:26.815 killing process with pid 77406 00:12:26.815 00:21:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77406' 00:12:26.815 00:21:13 -- common/autotest_common.sh@945 -- # kill 77406 00:12:26.815 00:21:13 -- common/autotest_common.sh@950 -- # wait 77406 00:12:27.073 00:21:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:27.073 00:21:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:27.073 00:21:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:27.073 00:21:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.073 00:21:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:27.073 00:21:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.073 00:21:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.073 00:21:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.073 00:21:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:27.073 00:12:27.073 real 0m2.902s 00:12:27.073 user 0m9.502s 00:12:27.073 sys 0m0.757s 00:12:27.073 00:21:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.073 ************************************ 00:12:27.073 END TEST nvmf_multitarget 00:12:27.073 00:21:14 -- common/autotest_common.sh@10 -- # set +x 00:12:27.073 ************************************ 00:12:27.073 00:21:14 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:27.073 00:21:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:27.073 00:21:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:27.073 00:21:14 -- common/autotest_common.sh@10 -- # set +x 00:12:27.073 ************************************ 00:12:27.073 START TEST nvmf_rpc 00:12:27.073 ************************************ 00:12:27.073 00:21:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:27.073 * Looking for test storage... 
00:12:27.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:27.073 00:21:14 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:27.073 00:21:14 -- nvmf/common.sh@7 -- # uname -s 00:12:27.073 00:21:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.073 00:21:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.073 00:21:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.073 00:21:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.073 00:21:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.073 00:21:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.073 00:21:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.073 00:21:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.073 00:21:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.073 00:21:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.073 00:21:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:12:27.073 00:21:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:12:27.073 00:21:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.073 00:21:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.073 00:21:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:27.073 00:21:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:27.073 00:21:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.073 00:21:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.073 00:21:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.330 00:21:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.330 00:21:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.330 00:21:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.330 00:21:14 -- paths/export.sh@5 
-- # export PATH 00:12:27.330 00:21:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.330 00:21:14 -- nvmf/common.sh@46 -- # : 0 00:12:27.330 00:21:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:27.330 00:21:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:27.330 00:21:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:27.330 00:21:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.330 00:21:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.330 00:21:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:27.330 00:21:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:27.330 00:21:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:27.330 00:21:14 -- target/rpc.sh@11 -- # loops=5 00:12:27.330 00:21:14 -- target/rpc.sh@23 -- # nvmftestinit 00:12:27.330 00:21:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:27.330 00:21:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.330 00:21:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:27.330 00:21:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:27.330 00:21:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:27.330 00:21:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.330 00:21:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.330 00:21:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.330 00:21:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:27.330 00:21:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:27.330 00:21:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:27.330 00:21:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:27.330 00:21:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:27.330 00:21:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:27.330 00:21:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.330 00:21:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.330 00:21:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:27.330 00:21:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:27.330 00:21:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:27.330 00:21:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:27.330 00:21:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:27.330 00:21:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.330 00:21:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:27.330 00:21:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:27.330 00:21:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:27.330 00:21:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:27.330 00:21:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:27.330 00:21:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:27.330 Cannot find device 
"nvmf_tgt_br" 00:12:27.330 00:21:14 -- nvmf/common.sh@154 -- # true 00:12:27.330 00:21:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:27.330 Cannot find device "nvmf_tgt_br2" 00:12:27.330 00:21:14 -- nvmf/common.sh@155 -- # true 00:12:27.330 00:21:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:27.330 00:21:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:27.330 Cannot find device "nvmf_tgt_br" 00:12:27.330 00:21:14 -- nvmf/common.sh@157 -- # true 00:12:27.330 00:21:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:27.330 Cannot find device "nvmf_tgt_br2" 00:12:27.330 00:21:14 -- nvmf/common.sh@158 -- # true 00:12:27.330 00:21:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:27.330 00:21:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:27.330 00:21:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:27.330 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.330 00:21:14 -- nvmf/common.sh@161 -- # true 00:12:27.330 00:21:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:27.330 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.330 00:21:14 -- nvmf/common.sh@162 -- # true 00:12:27.330 00:21:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:27.330 00:21:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:27.330 00:21:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:27.330 00:21:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:27.330 00:21:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:27.330 00:21:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:27.330 00:21:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:27.330 00:21:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:27.330 00:21:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:27.588 00:21:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:27.588 00:21:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:27.588 00:21:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:27.588 00:21:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:27.588 00:21:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:27.588 00:21:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:27.588 00:21:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:27.588 00:21:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:27.588 00:21:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:27.588 00:21:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:27.588 00:21:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:27.588 00:21:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:27.588 00:21:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:27.588 00:21:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:27.588 00:21:14 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:27.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:12:27.588 00:12:27.588 --- 10.0.0.2 ping statistics --- 00:12:27.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.588 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:27.588 00:21:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:27.588 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:27.588 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:12:27.588 00:12:27.588 --- 10.0.0.3 ping statistics --- 00:12:27.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.588 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:27.588 00:21:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:27.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:27.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:12:27.588 00:12:27.588 --- 10.0.0.1 ping statistics --- 00:12:27.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.588 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:27.588 00:21:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.588 00:21:14 -- nvmf/common.sh@421 -- # return 0 00:12:27.588 00:21:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:27.588 00:21:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.588 00:21:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:27.588 00:21:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:27.588 00:21:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.588 00:21:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:27.588 00:21:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:27.588 00:21:14 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:27.588 00:21:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:27.588 00:21:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:27.588 00:21:14 -- common/autotest_common.sh@10 -- # set +x 00:12:27.588 00:21:14 -- nvmf/common.sh@469 -- # nvmfpid=77643 00:12:27.588 00:21:14 -- nvmf/common.sh@470 -- # waitforlisten 77643 00:12:27.588 00:21:14 -- common/autotest_common.sh@819 -- # '[' -z 77643 ']' 00:12:27.588 00:21:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.588 00:21:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.588 00:21:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:27.588 00:21:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.588 00:21:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:27.588 00:21:14 -- common/autotest_common.sh@10 -- # set +x 00:12:27.588 [2024-07-13 00:21:14.746747] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:27.588 [2024-07-13 00:21:14.746845] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.846 [2024-07-13 00:21:14.881064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.846 [2024-07-13 00:21:14.992734] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:27.846 [2024-07-13 00:21:14.992923] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.846 [2024-07-13 00:21:14.992954] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.846 [2024-07-13 00:21:14.992963] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.846 [2024-07-13 00:21:14.993088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.846 [2024-07-13 00:21:14.993262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.846 [2024-07-13 00:21:14.993816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.846 [2024-07-13 00:21:14.993821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.776 00:21:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:28.776 00:21:15 -- common/autotest_common.sh@852 -- # return 0 00:12:28.776 00:21:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:28.776 00:21:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:28.776 00:21:15 -- common/autotest_common.sh@10 -- # set +x 00:12:28.776 00:21:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.776 00:21:15 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:28.776 00:21:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.776 00:21:15 -- common/autotest_common.sh@10 -- # set +x 00:12:28.776 00:21:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.776 00:21:15 -- target/rpc.sh@26 -- # stats='{ 00:12:28.776 "poll_groups": [ 00:12:28.776 { 00:12:28.776 "admin_qpairs": 0, 00:12:28.776 "completed_nvme_io": 0, 00:12:28.776 "current_admin_qpairs": 0, 00:12:28.776 "current_io_qpairs": 0, 00:12:28.776 "io_qpairs": 0, 00:12:28.776 "name": "nvmf_tgt_poll_group_0", 00:12:28.776 "pending_bdev_io": 0, 00:12:28.776 "transports": [] 00:12:28.776 }, 00:12:28.776 { 00:12:28.776 "admin_qpairs": 0, 00:12:28.776 "completed_nvme_io": 0, 00:12:28.776 "current_admin_qpairs": 0, 00:12:28.776 "current_io_qpairs": 0, 00:12:28.776 "io_qpairs": 0, 00:12:28.776 "name": "nvmf_tgt_poll_group_1", 00:12:28.776 "pending_bdev_io": 0, 00:12:28.776 "transports": [] 00:12:28.776 }, 00:12:28.776 { 00:12:28.776 "admin_qpairs": 0, 00:12:28.776 "completed_nvme_io": 0, 00:12:28.776 "current_admin_qpairs": 0, 00:12:28.776 "current_io_qpairs": 0, 00:12:28.776 "io_qpairs": 0, 00:12:28.776 "name": "nvmf_tgt_poll_group_2", 00:12:28.776 "pending_bdev_io": 0, 00:12:28.776 "transports": [] 00:12:28.776 }, 00:12:28.776 { 00:12:28.776 "admin_qpairs": 0, 00:12:28.776 "completed_nvme_io": 0, 00:12:28.776 "current_admin_qpairs": 0, 00:12:28.776 "current_io_qpairs": 0, 00:12:28.776 "io_qpairs": 0, 00:12:28.776 "name": "nvmf_tgt_poll_group_3", 00:12:28.776 "pending_bdev_io": 0, 00:12:28.776 "transports": [] 00:12:28.776 } 00:12:28.776 ], 00:12:28.776 "tick_rate": 2200000000 00:12:28.776 }' 00:12:28.776 
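The jcount/jsum checks that follow reduce this nvmf_get_stats JSON with jq piped through wc and awk; the same assertions can be made with jq alone, for example (a sketch, not part of the test script; rpc_cmd is the harness wrapper around scripts/rpc.py seen above):

# Sketch: summarise the poll-group stats shown above with jq.
rpc_cmd nvmf_get_stats > stats.json
jq '.poll_groups | length' stats.json                  # 4 poll groups, one per core in -m 0xF
jq '[.poll_groups[].admin_qpairs] | add' stats.json    # 0 admin queue pairs before any host connects
jq '[.poll_groups[].io_qpairs] | add' stats.json       # 0 I/O queue pairs before any host connects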
00:21:15 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:28.776 00:21:15 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:28.776 00:21:15 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:28.776 00:21:15 -- target/rpc.sh@15 -- # wc -l 00:12:28.776 00:21:15 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:28.776 00:21:15 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:28.776 00:21:15 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:28.776 00:21:15 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:28.776 00:21:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.776 00:21:15 -- common/autotest_common.sh@10 -- # set +x 00:12:28.776 [2024-07-13 00:21:15.925577] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.776 00:21:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.776 00:21:15 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:28.776 00:21:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.776 00:21:15 -- common/autotest_common.sh@10 -- # set +x 00:12:28.776 00:21:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.776 00:21:15 -- target/rpc.sh@33 -- # stats='{ 00:12:28.776 "poll_groups": [ 00:12:28.776 { 00:12:28.776 "admin_qpairs": 0, 00:12:28.776 "completed_nvme_io": 0, 00:12:28.776 "current_admin_qpairs": 0, 00:12:28.776 "current_io_qpairs": 0, 00:12:28.776 "io_qpairs": 0, 00:12:28.776 "name": "nvmf_tgt_poll_group_0", 00:12:28.776 "pending_bdev_io": 0, 00:12:28.776 "transports": [ 00:12:28.776 { 00:12:28.776 "trtype": "TCP" 00:12:28.776 } 00:12:28.776 ] 00:12:28.776 }, 00:12:28.776 { 00:12:28.776 "admin_qpairs": 0, 00:12:28.776 "completed_nvme_io": 0, 00:12:28.776 "current_admin_qpairs": 0, 00:12:28.776 "current_io_qpairs": 0, 00:12:28.776 "io_qpairs": 0, 00:12:28.776 "name": "nvmf_tgt_poll_group_1", 00:12:28.776 "pending_bdev_io": 0, 00:12:28.776 "transports": [ 00:12:28.776 { 00:12:28.776 "trtype": "TCP" 00:12:28.776 } 00:12:28.776 ] 00:12:28.776 }, 00:12:28.776 { 00:12:28.776 "admin_qpairs": 0, 00:12:28.776 "completed_nvme_io": 0, 00:12:28.776 "current_admin_qpairs": 0, 00:12:28.776 "current_io_qpairs": 0, 00:12:28.776 "io_qpairs": 0, 00:12:28.776 "name": "nvmf_tgt_poll_group_2", 00:12:28.776 "pending_bdev_io": 0, 00:12:28.776 "transports": [ 00:12:28.776 { 00:12:28.776 "trtype": "TCP" 00:12:28.776 } 00:12:28.776 ] 00:12:28.776 }, 00:12:28.776 { 00:12:28.776 "admin_qpairs": 0, 00:12:28.776 "completed_nvme_io": 0, 00:12:28.776 "current_admin_qpairs": 0, 00:12:28.776 "current_io_qpairs": 0, 00:12:28.776 "io_qpairs": 0, 00:12:28.776 "name": "nvmf_tgt_poll_group_3", 00:12:28.776 "pending_bdev_io": 0, 00:12:28.776 "transports": [ 00:12:28.776 { 00:12:28.776 "trtype": "TCP" 00:12:28.776 } 00:12:28.776 ] 00:12:28.776 } 00:12:28.776 ], 00:12:28.776 "tick_rate": 2200000000 00:12:28.776 }' 00:12:28.776 00:21:15 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:28.776 00:21:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:28.776 00:21:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:28.776 00:21:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:29.033 00:21:16 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:29.033 00:21:16 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:29.033 00:21:16 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:29.033 00:21:16 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:29.033 00:21:16 -- 
target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:29.033 00:21:16 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:29.033 00:21:16 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:29.033 00:21:16 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:29.033 00:21:16 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:29.033 00:21:16 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:29.033 00:21:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.033 00:21:16 -- common/autotest_common.sh@10 -- # set +x 00:12:29.033 Malloc1 00:12:29.033 00:21:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.033 00:21:16 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:29.033 00:21:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.033 00:21:16 -- common/autotest_common.sh@10 -- # set +x 00:12:29.033 00:21:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.033 00:21:16 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.033 00:21:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.033 00:21:16 -- common/autotest_common.sh@10 -- # set +x 00:12:29.033 00:21:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.033 00:21:16 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:29.033 00:21:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.033 00:21:16 -- common/autotest_common.sh@10 -- # set +x 00:12:29.033 00:21:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.033 00:21:16 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.033 00:21:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.033 00:21:16 -- common/autotest_common.sh@10 -- # set +x 00:12:29.033 [2024-07-13 00:21:16.125320] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.033 00:21:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.033 00:21:16 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 -a 10.0.0.2 -s 4420 00:12:29.033 00:21:16 -- common/autotest_common.sh@640 -- # local es=0 00:12:29.033 00:21:16 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 -a 10.0.0.2 -s 4420 00:12:29.033 00:21:16 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:29.033 00:21:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:29.033 00:21:16 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:29.033 00:21:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:29.033 00:21:16 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:29.033 00:21:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:29.033 00:21:16 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:29.033 00:21:16 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:29.033 00:21:16 -- 
common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 -a 10.0.0.2 -s 4420 00:12:29.033 [2024-07-13 00:21:16.153712] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192' 00:12:29.033 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:29.033 could not add new controller: failed to write to nvme-fabrics device 00:12:29.033 00:21:16 -- common/autotest_common.sh@643 -- # es=1 00:12:29.033 00:21:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:29.033 00:21:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:29.033 00:21:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:29.033 00:21:16 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:12:29.033 00:21:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.033 00:21:16 -- common/autotest_common.sh@10 -- # set +x 00:12:29.033 00:21:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.033 00:21:16 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.289 00:21:16 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.289 00:21:16 -- common/autotest_common.sh@1177 -- # local i=0 00:12:29.289 00:21:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.289 00:21:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:29.289 00:21:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:31.182 00:21:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:31.182 00:21:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:31.182 00:21:18 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.182 00:21:18 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:31.182 00:21:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.182 00:21:18 -- common/autotest_common.sh@1187 -- # return 0 00:12:31.182 00:21:18 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.182 00:21:18 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.182 00:21:18 -- common/autotest_common.sh@1198 -- # local i=0 00:12:31.182 00:21:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:31.182 00:21:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.438 00:21:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.438 00:21:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:31.438 00:21:18 -- common/autotest_common.sh@1210 -- # return 0 00:12:31.438 00:21:18 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:12:31.438 00:21:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.438 00:21:18 -- common/autotest_common.sh@10 
-- # set +x 00:12:31.438 00:21:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.438 00:21:18 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.438 00:21:18 -- common/autotest_common.sh@640 -- # local es=0 00:12:31.439 00:21:18 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.439 00:21:18 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:31.439 00:21:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:31.439 00:21:18 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:31.439 00:21:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:31.439 00:21:18 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:31.439 00:21:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:31.439 00:21:18 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:31.439 00:21:18 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:31.439 00:21:18 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.439 [2024-07-13 00:21:18.455377] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192' 00:12:31.439 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:31.439 could not add new controller: failed to write to nvme-fabrics device 00:12:31.439 00:21:18 -- common/autotest_common.sh@643 -- # es=1 00:12:31.439 00:21:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:31.439 00:21:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:31.439 00:21:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:31.439 00:21:18 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:31.439 00:21:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.439 00:21:18 -- common/autotest_common.sh@10 -- # set +x 00:12:31.439 00:21:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.439 00:21:18 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.439 00:21:18 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.439 00:21:18 -- common/autotest_common.sh@1177 -- # local i=0 00:12:31.439 00:21:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.439 00:21:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:31.439 00:21:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:33.961 00:21:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:33.961 00:21:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:33.961 00:21:20 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.961 00:21:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:33.961 00:21:20 
-- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.961 00:21:20 -- common/autotest_common.sh@1187 -- # return 0 00:12:33.961 00:21:20 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.961 00:21:20 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.961 00:21:20 -- common/autotest_common.sh@1198 -- # local i=0 00:12:33.961 00:21:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:33.961 00:21:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.961 00:21:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:33.961 00:21:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.961 00:21:20 -- common/autotest_common.sh@1210 -- # return 0 00:12:33.961 00:21:20 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.961 00:21:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.961 00:21:20 -- common/autotest_common.sh@10 -- # set +x 00:12:33.961 00:21:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.961 00:21:20 -- target/rpc.sh@81 -- # seq 1 5 00:12:33.961 00:21:20 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.961 00:21:20 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.961 00:21:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.961 00:21:20 -- common/autotest_common.sh@10 -- # set +x 00:12:33.961 00:21:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.961 00:21:20 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.961 00:21:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.961 00:21:20 -- common/autotest_common.sh@10 -- # set +x 00:12:33.961 [2024-07-13 00:21:20.764704] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.961 00:21:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.961 00:21:20 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.961 00:21:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.961 00:21:20 -- common/autotest_common.sh@10 -- # set +x 00:12:33.961 00:21:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.961 00:21:20 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.961 00:21:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.961 00:21:20 -- common/autotest_common.sh@10 -- # set +x 00:12:33.961 00:21:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.961 00:21:20 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.961 00:21:20 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.961 00:21:20 -- common/autotest_common.sh@1177 -- # local i=0 00:12:33.961 00:21:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.961 00:21:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:33.961 00:21:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:35.858 00:21:22 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 
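For reference, the waitforserial/waitforserial_disconnect traces above and below boil down to a small polling loop: check lsblk for a device carrying the expected serial, sleep, retry (the loop continues in the lines just below). A minimal sketch of that pattern — names and the retry limit are taken from the trace; the real helper lives in autotest_common.sh and may differ in detail:

    # Poll until a block device with the expected NVMe serial appears (sketch).
    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            # Count lsblk rows whose SERIAL column matches the target serial.
            if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
                return 0
            fi
            sleep 2
        done
        return 1
    }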
00:12:35.858 00:21:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:35.858 00:21:22 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.858 00:21:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:35.858 00:21:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.858 00:21:22 -- common/autotest_common.sh@1187 -- # return 0 00:12:35.858 00:21:22 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.858 00:21:23 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.858 00:21:23 -- common/autotest_common.sh@1198 -- # local i=0 00:12:35.858 00:21:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:35.858 00:21:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.858 00:21:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:35.858 00:21:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.858 00:21:23 -- common/autotest_common.sh@1210 -- # return 0 00:12:35.858 00:21:23 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:35.859 00:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.859 00:21:23 -- common/autotest_common.sh@10 -- # set +x 00:12:35.859 00:21:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.859 00:21:23 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.859 00:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.859 00:21:23 -- common/autotest_common.sh@10 -- # set +x 00:12:35.859 00:21:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.859 00:21:23 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:35.859 00:21:23 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:35.859 00:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.859 00:21:23 -- common/autotest_common.sh@10 -- # set +x 00:12:35.859 00:21:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.859 00:21:23 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.859 00:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.859 00:21:23 -- common/autotest_common.sh@10 -- # set +x 00:12:35.859 [2024-07-13 00:21:23.077752] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.859 00:21:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.859 00:21:23 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:35.859 00:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.859 00:21:23 -- common/autotest_common.sh@10 -- # set +x 00:12:36.115 00:21:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.115 00:21:23 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.115 00:21:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.115 00:21:23 -- common/autotest_common.sh@10 -- # set +x 00:12:36.115 00:21:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.115 00:21:23 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 
--hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.115 00:21:23 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.115 00:21:23 -- common/autotest_common.sh@1177 -- # local i=0 00:12:36.115 00:21:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.115 00:21:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:36.115 00:21:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:38.640 00:21:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:38.640 00:21:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:38.640 00:21:25 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.640 00:21:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:38.640 00:21:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.640 00:21:25 -- common/autotest_common.sh@1187 -- # return 0 00:12:38.640 00:21:25 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.640 00:21:25 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.640 00:21:25 -- common/autotest_common.sh@1198 -- # local i=0 00:12:38.640 00:21:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:38.640 00:21:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.640 00:21:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:38.640 00:21:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.640 00:21:25 -- common/autotest_common.sh@1210 -- # return 0 00:12:38.640 00:21:25 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:38.640 00:21:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.640 00:21:25 -- common/autotest_common.sh@10 -- # set +x 00:12:38.640 00:21:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.640 00:21:25 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.640 00:21:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.640 00:21:25 -- common/autotest_common.sh@10 -- # set +x 00:12:38.640 00:21:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.640 00:21:25 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:38.640 00:21:25 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:38.640 00:21:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.640 00:21:25 -- common/autotest_common.sh@10 -- # set +x 00:12:38.640 00:21:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.640 00:21:25 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.640 00:21:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.640 00:21:25 -- common/autotest_common.sh@10 -- # set +x 00:12:38.640 [2024-07-13 00:21:25.386907] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.640 00:21:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.640 00:21:25 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:38.640 00:21:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.640 00:21:25 -- common/autotest_common.sh@10 -- # set 
+x 00:12:38.640 00:21:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.640 00:21:25 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:38.640 00:21:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.640 00:21:25 -- common/autotest_common.sh@10 -- # set +x 00:12:38.640 00:21:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.640 00:21:25 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.640 00:21:25 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.640 00:21:25 -- common/autotest_common.sh@1177 -- # local i=0 00:12:38.640 00:21:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.640 00:21:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:38.640 00:21:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:40.538 00:21:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:40.538 00:21:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:40.538 00:21:27 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.538 00:21:27 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:40.538 00:21:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.538 00:21:27 -- common/autotest_common.sh@1187 -- # return 0 00:12:40.538 00:21:27 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.538 00:21:27 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.538 00:21:27 -- common/autotest_common.sh@1198 -- # local i=0 00:12:40.538 00:21:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.538 00:21:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:40.538 00:21:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.538 00:21:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:40.800 00:21:27 -- common/autotest_common.sh@1210 -- # return 0 00:12:40.800 00:21:27 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:40.800 00:21:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.800 00:21:27 -- common/autotest_common.sh@10 -- # set +x 00:12:40.800 00:21:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.800 00:21:27 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.800 00:21:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.800 00:21:27 -- common/autotest_common.sh@10 -- # set +x 00:12:40.800 00:21:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.800 00:21:27 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.800 00:21:27 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.800 00:21:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.800 00:21:27 -- common/autotest_common.sh@10 -- # set +x 00:12:40.800 00:21:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.800 00:21:27 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.800 00:21:27 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:40.800 00:21:27 -- common/autotest_common.sh@10 -- # set +x 00:12:40.800 [2024-07-13 00:21:27.811886] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.800 00:21:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.800 00:21:27 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.800 00:21:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.800 00:21:27 -- common/autotest_common.sh@10 -- # set +x 00:12:40.800 00:21:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.800 00:21:27 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.800 00:21:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.800 00:21:27 -- common/autotest_common.sh@10 -- # set +x 00:12:40.800 00:21:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.800 00:21:27 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.800 00:21:28 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.800 00:21:28 -- common/autotest_common.sh@1177 -- # local i=0 00:12:40.800 00:21:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.800 00:21:28 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:40.800 00:21:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:43.360 00:21:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:43.360 00:21:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:43.360 00:21:30 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.360 00:21:30 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:43.360 00:21:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.360 00:21:30 -- common/autotest_common.sh@1187 -- # return 0 00:12:43.360 00:21:30 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.360 00:21:30 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.360 00:21:30 -- common/autotest_common.sh@1198 -- # local i=0 00:12:43.360 00:21:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:43.360 00:21:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.360 00:21:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:43.360 00:21:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.360 00:21:30 -- common/autotest_common.sh@1210 -- # return 0 00:12:43.360 00:21:30 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:43.360 00:21:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.360 00:21:30 -- common/autotest_common.sh@10 -- # set +x 00:12:43.360 00:21:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.360 00:21:30 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.360 00:21:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.360 00:21:30 -- common/autotest_common.sh@10 -- # set +x 00:12:43.360 00:21:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.360 00:21:30 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 
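The trace above has just torn down one loop iteration (remove_ns, delete_subsystem) and is about to start the next. Stripped of the rpc_cmd/xtrace plumbing, each iteration drives roughly the following sequence against the target; this is only a sketch using rpc.py directly, with the --hostnqn/--hostid flags from the trace omitted since allow_any_host is enabled:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME   # target-side subsystem
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 5              # namespace 5 backed by Malloc1
    $rpc nvmf_subsystem_allow_any_host $nqn
    nvme connect -t tcp -n $nqn -a 10.0.0.2 -s 4420           # host attaches...
    nvme disconnect -n $nqn                                   # ...and detaches
    $rpc nvmf_subsystem_remove_ns $nqn 5
    $rpc nvmf_delete_subsystem $nqn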
00:12:43.360 00:21:30 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.360 00:21:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.360 00:21:30 -- common/autotest_common.sh@10 -- # set +x 00:12:43.360 00:21:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.360 00:21:30 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.360 00:21:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.360 00:21:30 -- common/autotest_common.sh@10 -- # set +x 00:12:43.360 [2024-07-13 00:21:30.125345] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.360 00:21:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.360 00:21:30 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:43.360 00:21:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.360 00:21:30 -- common/autotest_common.sh@10 -- # set +x 00:12:43.360 00:21:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.360 00:21:30 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.360 00:21:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.360 00:21:30 -- common/autotest_common.sh@10 -- # set +x 00:12:43.360 00:21:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.360 00:21:30 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.360 00:21:30 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:43.360 00:21:30 -- common/autotest_common.sh@1177 -- # local i=0 00:12:43.360 00:21:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.360 00:21:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:43.360 00:21:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:45.258 00:21:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:45.258 00:21:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:45.258 00:21:32 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:45.258 00:21:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:45.258 00:21:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.258 00:21:32 -- common/autotest_common.sh@1187 -- # return 0 00:12:45.258 00:21:32 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.258 00:21:32 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:45.258 00:21:32 -- common/autotest_common.sh@1198 -- # local i=0 00:12:45.258 00:21:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:45.258 00:21:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.258 00:21:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:45.258 00:21:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.258 00:21:32 -- common/autotest_common.sh@1210 -- # return 0 00:12:45.258 00:21:32 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:45.258 00:21:32 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:45.258 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.258 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.258 00:21:32 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.258 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.258 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.258 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.258 00:21:32 -- target/rpc.sh@99 -- # seq 1 5 00:12:45.258 00:21:32 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.258 00:21:32 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.258 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.258 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.258 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.258 00:21:32 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.258 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.258 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.258 [2024-07-13 00:21:32.436582] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.258 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.258 00:21:32 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.258 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.258 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.258 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.258 00:21:32 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.258 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.258 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.258 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.258 00:21:32 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.258 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.258 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.258 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.258 00:21:32 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.258 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.258 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.258 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.258 00:21:32 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.258 00:21:32 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.258 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.258 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.258 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.258 00:21:32 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.258 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.258 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 [2024-07-13 00:21:32.488607] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.517 00:21:32 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 [2024-07-13 00:21:32.540605] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.517 00:21:32 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 [2024-07-13 00:21:32.588729] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.517 00:21:32 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 [2024-07-13 00:21:32.640740] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:45.517 00:21:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.517 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:12:45.517 00:21:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.517 00:21:32 -- target/rpc.sh@110 -- # stats='{ 00:12:45.517 "poll_groups": [ 00:12:45.517 { 00:12:45.517 "admin_qpairs": 2, 00:12:45.517 "completed_nvme_io": 68, 00:12:45.517 "current_admin_qpairs": 0, 00:12:45.517 "current_io_qpairs": 0, 00:12:45.517 "io_qpairs": 16, 00:12:45.517 "name": "nvmf_tgt_poll_group_0", 00:12:45.517 "pending_bdev_io": 0, 00:12:45.517 "transports": [ 00:12:45.517 { 00:12:45.517 "trtype": "TCP" 00:12:45.517 } 00:12:45.517 ] 00:12:45.517 }, 00:12:45.517 { 00:12:45.517 "admin_qpairs": 3, 00:12:45.517 "completed_nvme_io": 117, 00:12:45.517 "current_admin_qpairs": 0, 00:12:45.517 "current_io_qpairs": 0, 00:12:45.517 "io_qpairs": 17, 00:12:45.517 "name": "nvmf_tgt_poll_group_1", 00:12:45.517 "pending_bdev_io": 0, 00:12:45.517 "transports": [ 00:12:45.517 { 00:12:45.517 "trtype": "TCP" 00:12:45.517 } 00:12:45.517 ] 00:12:45.517 }, 00:12:45.517 { 00:12:45.517 "admin_qpairs": 1, 00:12:45.517 "completed_nvme_io": 167, 00:12:45.517 "current_admin_qpairs": 0, 00:12:45.517 "current_io_qpairs": 0, 00:12:45.517 "io_qpairs": 19, 00:12:45.517 "name": "nvmf_tgt_poll_group_2", 00:12:45.517 "pending_bdev_io": 0, 00:12:45.517 "transports": [ 00:12:45.517 { 00:12:45.517 "trtype": "TCP" 00:12:45.517 } 00:12:45.517 ] 00:12:45.517 }, 00:12:45.517 { 00:12:45.517 "admin_qpairs": 1, 00:12:45.517 "completed_nvme_io": 68, 00:12:45.517 "current_admin_qpairs": 0, 00:12:45.517 "current_io_qpairs": 0, 00:12:45.517 "io_qpairs": 18, 00:12:45.517 "name": "nvmf_tgt_poll_group_3", 00:12:45.517 "pending_bdev_io": 0, 00:12:45.517 "transports": [ 00:12:45.517 { 00:12:45.517 "trtype": "TCP" 00:12:45.517 } 00:12:45.517 ] 00:12:45.517 } 00:12:45.517 ], 00:12:45.517 "tick_rate": 2200000000 00:12:45.517 }' 00:12:45.517 00:21:32 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:45.517 00:21:32 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:45.517 00:21:32 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:45.517 00:21:32 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:45.774 00:21:32 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:45.774 00:21:32 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:45.774 00:21:32 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:45.774 00:21:32 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 
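The final stats blob is then reduced with the same two helpers seen earlier in the trace: jcount pipes a jq filter into wc -l, and jsum pipes it into an awk sum (the awk half of the io_qpairs pipeline continues just below). A sketch of what those helpers amount to, presumably fed from the $stats variable captured above; the exact rpc.sh implementation may differ:

    jcount() { jq "$1" <<< "$stats" | wc -l; }
    jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }
    (( $(jcount '.poll_groups[].name') == 4 ))        # one poll group per core
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # admin qpairs were exercised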
00:12:45.774 00:21:32 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:45.774 00:21:32 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:45.774 00:21:32 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:45.774 00:21:32 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:45.774 00:21:32 -- target/rpc.sh@123 -- # nvmftestfini 00:12:45.774 00:21:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:45.774 00:21:32 -- nvmf/common.sh@116 -- # sync 00:12:45.775 00:21:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:45.775 00:21:32 -- nvmf/common.sh@119 -- # set +e 00:12:45.775 00:21:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:45.775 00:21:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:45.775 rmmod nvme_tcp 00:12:45.775 rmmod nvme_fabrics 00:12:45.775 rmmod nvme_keyring 00:12:45.775 00:21:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:45.775 00:21:32 -- nvmf/common.sh@123 -- # set -e 00:12:45.775 00:21:32 -- nvmf/common.sh@124 -- # return 0 00:12:45.775 00:21:32 -- nvmf/common.sh@477 -- # '[' -n 77643 ']' 00:12:45.775 00:21:32 -- nvmf/common.sh@478 -- # killprocess 77643 00:12:45.775 00:21:32 -- common/autotest_common.sh@926 -- # '[' -z 77643 ']' 00:12:45.775 00:21:32 -- common/autotest_common.sh@930 -- # kill -0 77643 00:12:45.775 00:21:32 -- common/autotest_common.sh@931 -- # uname 00:12:45.775 00:21:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:45.775 00:21:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77643 00:12:45.775 killing process with pid 77643 00:12:45.775 00:21:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:45.775 00:21:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:45.775 00:21:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77643' 00:12:45.775 00:21:32 -- common/autotest_common.sh@945 -- # kill 77643 00:12:45.775 00:21:32 -- common/autotest_common.sh@950 -- # wait 77643 00:12:46.339 00:21:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:46.339 00:21:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:46.339 00:21:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:46.339 00:21:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:46.339 00:21:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:46.339 00:21:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.339 00:21:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.339 00:21:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.339 00:21:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:46.339 00:12:46.339 real 0m19.105s 00:12:46.339 user 1m12.406s 00:12:46.339 sys 0m2.095s 00:12:46.339 00:21:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:46.339 ************************************ 00:12:46.339 00:21:33 -- common/autotest_common.sh@10 -- # set +x 00:12:46.339 END TEST nvmf_rpc 00:12:46.339 ************************************ 00:12:46.339 00:21:33 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:46.339 00:21:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:46.339 00:21:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:46.339 00:21:33 -- common/autotest_common.sh@10 -- # set +x 00:12:46.339 ************************************ 00:12:46.339 START TEST nvmf_invalid 00:12:46.339 ************************************ 00:12:46.339 
00:21:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:46.339 * Looking for test storage... 00:12:46.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:46.339 00:21:33 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:46.339 00:21:33 -- nvmf/common.sh@7 -- # uname -s 00:12:46.339 00:21:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.339 00:21:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.339 00:21:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.339 00:21:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.339 00:21:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.339 00:21:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.339 00:21:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.339 00:21:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.339 00:21:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.339 00:21:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.339 00:21:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:12:46.339 00:21:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:12:46.339 00:21:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.339 00:21:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.339 00:21:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:46.339 00:21:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:46.339 00:21:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.339 00:21:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.339 00:21:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.339 00:21:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.339 00:21:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.339 00:21:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.339 00:21:33 -- paths/export.sh@5 -- # export PATH 00:12:46.339 00:21:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.339 00:21:33 -- nvmf/common.sh@46 -- # : 0 00:12:46.339 00:21:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:46.339 00:21:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:46.339 00:21:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:46.339 00:21:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.339 00:21:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.339 00:21:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:46.339 00:21:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:46.339 00:21:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:46.339 00:21:33 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:46.339 00:21:33 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.339 00:21:33 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:46.339 00:21:33 -- target/invalid.sh@14 -- # target=foobar 00:12:46.339 00:21:33 -- target/invalid.sh@16 -- # RANDOM=0 00:12:46.339 00:21:33 -- target/invalid.sh@34 -- # nvmftestinit 00:12:46.339 00:21:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:46.339 00:21:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.339 00:21:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:46.339 00:21:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:46.339 00:21:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:46.339 00:21:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.339 00:21:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.339 00:21:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.339 00:21:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:46.339 00:21:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:46.339 00:21:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:46.339 00:21:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:46.339 00:21:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:46.339 00:21:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:46.339 00:21:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.339 00:21:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.339 00:21:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:12:46.339 00:21:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:46.339 00:21:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:46.339 00:21:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:46.339 00:21:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:46.339 00:21:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.339 00:21:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:46.339 00:21:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:46.339 00:21:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:46.339 00:21:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:46.339 00:21:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:46.339 00:21:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:46.339 Cannot find device "nvmf_tgt_br" 00:12:46.339 00:21:33 -- nvmf/common.sh@154 -- # true 00:12:46.339 00:21:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:46.339 Cannot find device "nvmf_tgt_br2" 00:12:46.339 00:21:33 -- nvmf/common.sh@155 -- # true 00:12:46.339 00:21:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:46.339 00:21:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:46.339 Cannot find device "nvmf_tgt_br" 00:12:46.339 00:21:33 -- nvmf/common.sh@157 -- # true 00:12:46.339 00:21:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:46.339 Cannot find device "nvmf_tgt_br2" 00:12:46.339 00:21:33 -- nvmf/common.sh@158 -- # true 00:12:46.339 00:21:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:46.596 00:21:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:46.596 00:21:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:46.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:46.596 00:21:33 -- nvmf/common.sh@161 -- # true 00:12:46.596 00:21:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:46.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:46.596 00:21:33 -- nvmf/common.sh@162 -- # true 00:12:46.596 00:21:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:46.596 00:21:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:46.596 00:21:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:46.596 00:21:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:46.596 00:21:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:46.596 00:21:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:46.596 00:21:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:46.596 00:21:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:46.596 00:21:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:46.596 00:21:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:46.596 00:21:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:46.596 00:21:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:46.596 00:21:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
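Condensed, the ip commands in this stretch of the trace build a three-veth topology: one initiator-side pair staying on the host and two target-side pairs whose inner ends move into the nvmf_tgt_ns_spdk namespace, with the host-side peers later enslaved to an nvmf_br bridge (the addressing, bridging, iptables rules and ping checks follow in the lines below). A sketch of the skeleton:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side,    10.0.0.2/24
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target side,    10.0.0.3/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk             # inner ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge                             # the *_br peers get attached here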
00:12:46.596 00:21:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:46.596 00:21:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:46.596 00:21:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:46.596 00:21:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:46.596 00:21:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:46.596 00:21:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:46.596 00:21:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:46.596 00:21:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:46.596 00:21:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:46.596 00:21:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:46.596 00:21:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:46.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:12:46.596 00:12:46.596 --- 10.0.0.2 ping statistics --- 00:12:46.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.596 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:12:46.597 00:21:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:46.597 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:46.597 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:12:46.597 00:12:46.597 --- 10.0.0.3 ping statistics --- 00:12:46.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.597 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:46.597 00:21:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:46.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:46.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:12:46.597 00:12:46.597 --- 10.0.0.1 ping statistics --- 00:12:46.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.597 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:12:46.597 00:21:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.597 00:21:33 -- nvmf/common.sh@421 -- # return 0 00:12:46.597 00:21:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:46.597 00:21:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.597 00:21:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:46.597 00:21:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:46.597 00:21:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.597 00:21:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:46.597 00:21:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:46.597 00:21:33 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:46.597 00:21:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:46.597 00:21:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:46.597 00:21:33 -- common/autotest_common.sh@10 -- # set +x 00:12:46.597 00:21:33 -- nvmf/common.sh@469 -- # nvmfpid=78149 00:12:46.597 00:21:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.597 00:21:33 -- nvmf/common.sh@470 -- # waitforlisten 78149 00:12:46.597 00:21:33 -- common/autotest_common.sh@819 -- # '[' -z 78149 ']' 00:12:46.597 00:21:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.597 00:21:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:46.597 00:21:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.597 00:21:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:46.597 00:21:33 -- common/autotest_common.sh@10 -- # set +x 00:12:46.854 [2024-07-13 00:21:33.879563] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:46.854 [2024-07-13 00:21:33.879677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.854 [2024-07-13 00:21:34.019460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.111 [2024-07-13 00:21:34.132173] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:47.111 [2024-07-13 00:21:34.132394] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.111 [2024-07-13 00:21:34.132412] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.111 [2024-07-13 00:21:34.132423] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
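A hedged sketch of how the target application is brought up for this test, matching the nvmfappstart trace above: nvmf_tgt runs inside the nvmf_tgt_ns_spdk namespace, and the script then waits for its JSON-RPC socket before issuing any rpc.py calls. The polling loop here is illustrative only; the real waitforlisten helper in autotest_common.sh does more (retries, PID checks, timeouts).

    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the UNIX-domain JSON-RPC socket exists before talking to the target
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"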
00:12:47.111 [2024-07-13 00:21:34.132671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.112 [2024-07-13 00:21:34.133004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.112 [2024-07-13 00:21:34.133091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.112 [2024-07-13 00:21:34.133093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.690 00:21:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:47.690 00:21:34 -- common/autotest_common.sh@852 -- # return 0 00:12:47.690 00:21:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:47.690 00:21:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:47.690 00:21:34 -- common/autotest_common.sh@10 -- # set +x 00:12:47.690 00:21:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.690 00:21:34 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:47.947 00:21:34 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21169 00:12:47.947 [2024-07-13 00:21:35.169651] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:48.203 00:21:35 -- target/invalid.sh@40 -- # out='2024/07/13 00:21:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode21169 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:48.204 request: 00:12:48.204 { 00:12:48.204 "method": "nvmf_create_subsystem", 00:12:48.204 "params": { 00:12:48.204 "nqn": "nqn.2016-06.io.spdk:cnode21169", 00:12:48.204 "tgt_name": "foobar" 00:12:48.204 } 00:12:48.204 } 00:12:48.204 Got JSON-RPC error response 00:12:48.204 GoRPCClient: error on JSON-RPC call' 00:12:48.204 00:21:35 -- target/invalid.sh@41 -- # [[ 2024/07/13 00:21:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode21169 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:48.204 request: 00:12:48.204 { 00:12:48.204 "method": "nvmf_create_subsystem", 00:12:48.204 "params": { 00:12:48.204 "nqn": "nqn.2016-06.io.spdk:cnode21169", 00:12:48.204 "tgt_name": "foobar" 00:12:48.204 } 00:12:48.204 } 00:12:48.204 Got JSON-RPC error response 00:12:48.204 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:48.204 00:21:35 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:48.204 00:21:35 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12111 00:12:48.461 [2024-07-13 00:21:35.470232] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12111: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:48.461 00:21:35 -- target/invalid.sh@45 -- # out='2024/07/13 00:21:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12111 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:48.461 request: 00:12:48.461 { 00:12:48.461 "method": "nvmf_create_subsystem", 00:12:48.461 "params": { 00:12:48.461 "nqn": "nqn.2016-06.io.spdk:cnode12111", 00:12:48.461 
"serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:48.461 } 00:12:48.461 } 00:12:48.461 Got JSON-RPC error response 00:12:48.461 GoRPCClient: error on JSON-RPC call' 00:12:48.461 00:21:35 -- target/invalid.sh@46 -- # [[ 2024/07/13 00:21:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12111 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:48.461 request: 00:12:48.461 { 00:12:48.461 "method": "nvmf_create_subsystem", 00:12:48.461 "params": { 00:12:48.461 "nqn": "nqn.2016-06.io.spdk:cnode12111", 00:12:48.461 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:48.461 } 00:12:48.461 } 00:12:48.461 Got JSON-RPC error response 00:12:48.461 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:48.461 00:21:35 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:48.461 00:21:35 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27633 00:12:48.718 [2024-07-13 00:21:35.754671] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27633: invalid model number 'SPDK_Controller' 00:12:48.718 00:21:35 -- target/invalid.sh@50 -- # out='2024/07/13 00:21:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode27633], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:48.718 request: 00:12:48.718 { 00:12:48.718 "method": "nvmf_create_subsystem", 00:12:48.718 "params": { 00:12:48.718 "nqn": "nqn.2016-06.io.spdk:cnode27633", 00:12:48.718 "model_number": "SPDK_Controller\u001f" 00:12:48.718 } 00:12:48.718 } 00:12:48.718 Got JSON-RPC error response 00:12:48.718 GoRPCClient: error on JSON-RPC call' 00:12:48.718 00:21:35 -- target/invalid.sh@51 -- # [[ 2024/07/13 00:21:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode27633], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:48.718 request: 00:12:48.718 { 00:12:48.718 "method": "nvmf_create_subsystem", 00:12:48.718 "params": { 00:12:48.718 "nqn": "nqn.2016-06.io.spdk:cnode27633", 00:12:48.718 "model_number": "SPDK_Controller\u001f" 00:12:48.718 } 00:12:48.718 } 00:12:48.718 Got JSON-RPC error response 00:12:48.718 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:48.718 00:21:35 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:48.718 00:21:35 -- target/invalid.sh@19 -- # local length=21 ll 00:12:48.718 00:21:35 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:48.718 00:21:35 -- target/invalid.sh@21 -- # local chars 00:12:48.718 00:21:35 -- target/invalid.sh@22 -- # local string 00:12:48.718 00:21:35 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:48.718 00:21:35 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:12:48.718 00:21:35 -- target/invalid.sh@25 -- # printf %x 41 00:12:48.718 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:48.718 00:21:35 -- target/invalid.sh@25 -- # string+=')' 00:12:48.718 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.718 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.718 00:21:35 -- target/invalid.sh@25 -- # printf %x 88 00:12:48.718 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:48.718 00:21:35 -- target/invalid.sh@25 -- # string+=X 00:12:48.718 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.718 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.718 00:21:35 -- target/invalid.sh@25 -- # printf %x 83 00:12:48.718 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:48.718 00:21:35 -- target/invalid.sh@25 -- # string+=S 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 34 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+='"' 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 73 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+=I 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 110 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+=n 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 112 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+=p 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 63 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+='?' 
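The three failures above (an unknown target name, a serial number containing a control character, and a model number containing a control character) all follow the same check pattern: run the RPC with a bad parameter, capture the error text, and assert on the expected message. A condensed sketch of that pattern, using the cnode27633 model-number case and the expected substring copied from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # '|| true' keeps the expected non-zero exit status from aborting the test script
    out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27633 2>&1) || true
    [[ $out == *'Invalid MN'* ]] && echo 'target rejected the bad model number as expected'

The cntlid-range and listener tests later in this section apply the same out=/[[ pattern with their own expected substrings.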
00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 67 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+=C 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 119 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+=w 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 48 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+=0 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 98 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+=b 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 97 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+=a 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 99 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+=c 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 69 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+=E 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 83 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+=S 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 59 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+=';' 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 68 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+=D 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 71 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+=G 
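The long run of printf, echo -e, and string+= lines above and below is the xtrace expansion of the gen_random_s helper building a random 21-character string one character at a time; that string is then used as an invalid (over-length) serial number below. A condensed sketch of the same idea, restricted to codes 33-126 here so that command substitution cannot drop a trailing space, whereas the real helper draws from a wider character table:

    gen_random_s_sketch() {
        local length=$1 ll code string=''
        for (( ll = 0; ll < length; ll++ )); do
            code=$(printf '%x' $(( 33 + RANDOM % 94 )))  # random printable ASCII code point, as hex
            string+=$(echo -e "\x$code")                 # turn the hex code into the character itself
        done
        printf '%s\n' "$string"
    }
    gen_random_s_sketch 21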
00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 120 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+=x 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # printf %x 60 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:48.719 00:21:35 -- target/invalid.sh@25 -- # string+='<' 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.719 00:21:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.719 00:21:35 -- target/invalid.sh@28 -- # [[ ) == \- ]] 00:12:48.719 00:21:35 -- target/invalid.sh@31 -- # echo ')XS"Inp?Cw0bacES;DGx<' 00:12:48.719 00:21:35 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s ')XS"Inp?Cw0bacES;DGx<' nqn.2016-06.io.spdk:cnode16323 00:12:48.977 [2024-07-13 00:21:36.107115] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16323: invalid serial number ')XS"Inp?Cw0bacES;DGx<' 00:12:48.977 00:21:36 -- target/invalid.sh@54 -- # out='2024/07/13 00:21:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode16323 serial_number:)XS"Inp?Cw0bacES;DGx<], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN )XS"Inp?Cw0bacES;DGx< 00:12:48.977 request: 00:12:48.977 { 00:12:48.977 "method": "nvmf_create_subsystem", 00:12:48.977 "params": { 00:12:48.977 "nqn": "nqn.2016-06.io.spdk:cnode16323", 00:12:48.977 "serial_number": ")XS\"Inp?Cw0bacES;DGx<" 00:12:48.977 } 00:12:48.977 } 00:12:48.977 Got JSON-RPC error response 00:12:48.977 GoRPCClient: error on JSON-RPC call' 00:12:48.977 00:21:36 -- target/invalid.sh@55 -- # [[ 2024/07/13 00:21:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode16323 serial_number:)XS"Inp?Cw0bacES;DGx<], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN )XS"Inp?Cw0bacES;DGx< 00:12:48.977 request: 00:12:48.977 { 00:12:48.977 "method": "nvmf_create_subsystem", 00:12:48.977 "params": { 00:12:48.977 "nqn": "nqn.2016-06.io.spdk:cnode16323", 00:12:48.977 "serial_number": ")XS\"Inp?Cw0bacES;DGx<" 00:12:48.977 } 00:12:48.977 } 00:12:48.977 Got JSON-RPC error response 00:12:48.977 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:48.977 00:21:36 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:48.977 00:21:36 -- target/invalid.sh@19 -- # local length=41 ll 00:12:48.977 00:21:36 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:48.977 00:21:36 -- target/invalid.sh@21 -- # local chars 00:12:48.977 00:21:36 -- target/invalid.sh@22 -- # local string 00:12:48.977 00:21:36 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:48.977 00:21:36 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # printf %x 46 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # string+=. 00:12:48.977 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.977 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # printf %x 125 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # string+='}' 00:12:48.977 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.977 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # printf %x 113 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # string+=q 00:12:48.977 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.977 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # printf %x 124 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # string+='|' 00:12:48.977 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.977 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # printf %x 90 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # string+=Z 00:12:48.977 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.977 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # printf %x 123 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # string+='{' 00:12:48.977 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.977 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # printf %x 54 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:48.977 00:21:36 -- target/invalid.sh@25 -- # string+=6 00:12:48.977 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # printf %x 75 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # string+=K 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # printf %x 78 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # string+=N 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # printf %x 124 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # string+='|' 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # printf %x 117 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # string+=u 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.978 00:21:36 
-- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # printf %x 84 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # string+=T 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # printf %x 90 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # string+=Z 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # printf %x 67 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # string+=C 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # printf %x 77 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # string+=M 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # printf %x 101 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:48.978 00:21:36 -- target/invalid.sh@25 -- # string+=e 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:48.978 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 117 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=u 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 39 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=\' 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 99 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=c 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 91 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+='[' 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 45 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=- 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 101 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=e 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 58 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=: 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 41 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=')' 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 41 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=')' 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 124 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+='|' 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 40 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+='(' 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 92 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+='\' 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 92 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+='\' 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 62 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+='>' 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 56 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=8 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 50 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=2 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 118 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=v 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 
-- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 54 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=6 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 57 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=9 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 70 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=F 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 73 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=I 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 43 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=+ 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 73 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=I 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 43 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=+ 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # printf %x 39 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:49.236 00:21:36 -- target/invalid.sh@25 -- # string+=\' 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:49.236 00:21:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:49.236 00:21:36 -- target/invalid.sh@28 -- # [[ . 
== \- ]] 00:12:49.236 00:21:36 -- target/invalid.sh@31 -- # echo '.}q|Z{6KN|uTZCMeu'\''c[-e:))|(\\>82v69FI+I+'\''' 00:12:49.236 00:21:36 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '.}q|Z{6KN|uTZCMeu'\''c[-e:))|(\\>82v69FI+I+'\''' nqn.2016-06.io.spdk:cnode8210 00:12:49.493 [2024-07-13 00:21:36.579860] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8210: invalid model number '.}q|Z{6KN|uTZCMeu'c[-e:))|(\\>82v69FI+I+'' 00:12:49.493 00:21:36 -- target/invalid.sh@58 -- # out='2024/07/13 00:21:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:.}q|Z{6KN|uTZCMeu'\''c[-e:))|(\\>82v69FI+I+'\'' nqn:nqn.2016-06.io.spdk:cnode8210], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN .}q|Z{6KN|uTZCMeu'\''c[-e:))|(\\>82v69FI+I+'\'' 00:12:49.493 request: 00:12:49.493 { 00:12:49.493 "method": "nvmf_create_subsystem", 00:12:49.493 "params": { 00:12:49.493 "nqn": "nqn.2016-06.io.spdk:cnode8210", 00:12:49.493 "model_number": ".}q|Z{6KN|uTZCMeu'\''c[-e:))|(\\\\>82v69FI+I+'\''" 00:12:49.493 } 00:12:49.493 } 00:12:49.493 Got JSON-RPC error response 00:12:49.493 GoRPCClient: error on JSON-RPC call' 00:12:49.493 00:21:36 -- target/invalid.sh@59 -- # [[ 2024/07/13 00:21:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:.}q|Z{6KN|uTZCMeu'c[-e:))|(\\>82v69FI+I+' nqn:nqn.2016-06.io.spdk:cnode8210], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN .}q|Z{6KN|uTZCMeu'c[-e:))|(\\>82v69FI+I+' 00:12:49.493 request: 00:12:49.493 { 00:12:49.493 "method": "nvmf_create_subsystem", 00:12:49.493 "params": { 00:12:49.493 "nqn": "nqn.2016-06.io.spdk:cnode8210", 00:12:49.493 "model_number": ".}q|Z{6KN|uTZCMeu'c[-e:))|(\\\\>82v69FI+I+'" 00:12:49.493 } 00:12:49.493 } 00:12:49.493 Got JSON-RPC error response 00:12:49.493 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:49.493 00:21:36 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:49.751 [2024-07-13 00:21:36.864355] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.751 00:21:36 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:50.008 00:21:37 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:50.008 00:21:37 -- target/invalid.sh@67 -- # echo '' 00:12:50.008 00:21:37 -- target/invalid.sh@67 -- # head -n 1 00:12:50.008 00:21:37 -- target/invalid.sh@67 -- # IP= 00:12:50.008 00:21:37 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:50.267 [2024-07-13 00:21:37.454431] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:50.267 00:21:37 -- target/invalid.sh@69 -- # out='2024/07/13 00:21:37 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:50.267 request: 00:12:50.267 { 00:12:50.267 "method": "nvmf_subsystem_remove_listener", 00:12:50.267 "params": { 00:12:50.267 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:50.267 "listen_address": { 00:12:50.267 "trtype": "tcp", 00:12:50.267 "traddr": "", 00:12:50.267 
"trsvcid": "4421" 00:12:50.267 } 00:12:50.267 } 00:12:50.267 } 00:12:50.267 Got JSON-RPC error response 00:12:50.267 GoRPCClient: error on JSON-RPC call' 00:12:50.267 00:21:37 -- target/invalid.sh@70 -- # [[ 2024/07/13 00:21:37 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:50.267 request: 00:12:50.267 { 00:12:50.267 "method": "nvmf_subsystem_remove_listener", 00:12:50.267 "params": { 00:12:50.267 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:50.267 "listen_address": { 00:12:50.267 "trtype": "tcp", 00:12:50.267 "traddr": "", 00:12:50.267 "trsvcid": "4421" 00:12:50.267 } 00:12:50.267 } 00:12:50.267 } 00:12:50.267 Got JSON-RPC error response 00:12:50.267 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:50.267 00:21:37 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3434 -i 0 00:12:50.525 [2024-07-13 00:21:37.738915] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3434: invalid cntlid range [0-65519] 00:12:50.782 00:21:37 -- target/invalid.sh@73 -- # out='2024/07/13 00:21:37 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode3434], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:50.782 request: 00:12:50.782 { 00:12:50.782 "method": "nvmf_create_subsystem", 00:12:50.782 "params": { 00:12:50.782 "nqn": "nqn.2016-06.io.spdk:cnode3434", 00:12:50.782 "min_cntlid": 0 00:12:50.782 } 00:12:50.782 } 00:12:50.782 Got JSON-RPC error response 00:12:50.782 GoRPCClient: error on JSON-RPC call' 00:12:50.782 00:21:37 -- target/invalid.sh@74 -- # [[ 2024/07/13 00:21:37 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode3434], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:50.782 request: 00:12:50.782 { 00:12:50.782 "method": "nvmf_create_subsystem", 00:12:50.782 "params": { 00:12:50.782 "nqn": "nqn.2016-06.io.spdk:cnode3434", 00:12:50.782 "min_cntlid": 0 00:12:50.782 } 00:12:50.782 } 00:12:50.782 Got JSON-RPC error response 00:12:50.782 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:50.782 00:21:37 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7348 -i 65520 00:12:50.782 [2024-07-13 00:21:38.011270] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7348: invalid cntlid range [65520-65519] 00:12:51.039 00:21:38 -- target/invalid.sh@75 -- # out='2024/07/13 00:21:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode7348], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:51.039 request: 00:12:51.039 { 00:12:51.039 "method": "nvmf_create_subsystem", 00:12:51.039 "params": { 00:12:51.039 "nqn": "nqn.2016-06.io.spdk:cnode7348", 00:12:51.039 "min_cntlid": 65520 00:12:51.039 } 00:12:51.039 } 00:12:51.039 Got JSON-RPC error response 00:12:51.039 GoRPCClient: error on JSON-RPC call' 00:12:51.039 00:21:38 -- target/invalid.sh@76 
-- # [[ 2024/07/13 00:21:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode7348], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:51.039 request: 00:12:51.039 { 00:12:51.039 "method": "nvmf_create_subsystem", 00:12:51.039 "params": { 00:12:51.039 "nqn": "nqn.2016-06.io.spdk:cnode7348", 00:12:51.039 "min_cntlid": 65520 00:12:51.039 } 00:12:51.039 } 00:12:51.039 Got JSON-RPC error response 00:12:51.039 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:51.039 00:21:38 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23496 -I 0 00:12:51.039 [2024-07-13 00:21:38.243595] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23496: invalid cntlid range [1-0] 00:12:51.039 00:21:38 -- target/invalid.sh@77 -- # out='2024/07/13 00:21:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode23496], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:51.039 request: 00:12:51.039 { 00:12:51.039 "method": "nvmf_create_subsystem", 00:12:51.039 "params": { 00:12:51.039 "nqn": "nqn.2016-06.io.spdk:cnode23496", 00:12:51.039 "max_cntlid": 0 00:12:51.039 } 00:12:51.039 } 00:12:51.039 Got JSON-RPC error response 00:12:51.039 GoRPCClient: error on JSON-RPC call' 00:12:51.039 00:21:38 -- target/invalid.sh@78 -- # [[ 2024/07/13 00:21:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode23496], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:51.039 request: 00:12:51.039 { 00:12:51.039 "method": "nvmf_create_subsystem", 00:12:51.039 "params": { 00:12:51.039 "nqn": "nqn.2016-06.io.spdk:cnode23496", 00:12:51.039 "max_cntlid": 0 00:12:51.039 } 00:12:51.039 } 00:12:51.039 Got JSON-RPC error response 00:12:51.040 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:51.297 00:21:38 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11713 -I 65520 00:12:51.297 [2024-07-13 00:21:38.487930] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11713: invalid cntlid range [1-65520] 00:12:51.297 00:21:38 -- target/invalid.sh@79 -- # out='2024/07/13 00:21:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode11713], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:51.297 request: 00:12:51.297 { 00:12:51.297 "method": "nvmf_create_subsystem", 00:12:51.297 "params": { 00:12:51.297 "nqn": "nqn.2016-06.io.spdk:cnode11713", 00:12:51.297 "max_cntlid": 65520 00:12:51.297 } 00:12:51.297 } 00:12:51.297 Got JSON-RPC error response 00:12:51.297 GoRPCClient: error on JSON-RPC call' 00:12:51.297 00:21:38 -- target/invalid.sh@80 -- # [[ 2024/07/13 00:21:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode11713], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:51.297 request: 00:12:51.297 { 00:12:51.297 "method": "nvmf_create_subsystem", 00:12:51.297 
"params": { 00:12:51.297 "nqn": "nqn.2016-06.io.spdk:cnode11713", 00:12:51.297 "max_cntlid": 65520 00:12:51.297 } 00:12:51.297 } 00:12:51.297 Got JSON-RPC error response 00:12:51.297 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:51.297 00:21:38 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23371 -i 6 -I 5 00:12:51.555 [2024-07-13 00:21:38.716296] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23371: invalid cntlid range [6-5] 00:12:51.555 00:21:38 -- target/invalid.sh@83 -- # out='2024/07/13 00:21:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode23371], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:51.555 request: 00:12:51.555 { 00:12:51.555 "method": "nvmf_create_subsystem", 00:12:51.555 "params": { 00:12:51.555 "nqn": "nqn.2016-06.io.spdk:cnode23371", 00:12:51.555 "min_cntlid": 6, 00:12:51.555 "max_cntlid": 5 00:12:51.555 } 00:12:51.555 } 00:12:51.555 Got JSON-RPC error response 00:12:51.555 GoRPCClient: error on JSON-RPC call' 00:12:51.555 00:21:38 -- target/invalid.sh@84 -- # [[ 2024/07/13 00:21:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode23371], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:51.555 request: 00:12:51.555 { 00:12:51.555 "method": "nvmf_create_subsystem", 00:12:51.555 "params": { 00:12:51.555 "nqn": "nqn.2016-06.io.spdk:cnode23371", 00:12:51.555 "min_cntlid": 6, 00:12:51.555 "max_cntlid": 5 00:12:51.555 } 00:12:51.555 } 00:12:51.555 Got JSON-RPC error response 00:12:51.555 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:51.555 00:21:38 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:51.813 00:21:38 -- target/invalid.sh@87 -- # out='request: 00:12:51.813 { 00:12:51.813 "name": "foobar", 00:12:51.813 "method": "nvmf_delete_target", 00:12:51.813 "req_id": 1 00:12:51.813 } 00:12:51.813 Got JSON-RPC error response 00:12:51.813 response: 00:12:51.813 { 00:12:51.813 "code": -32602, 00:12:51.813 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:51.813 }' 00:12:51.813 00:21:38 -- target/invalid.sh@88 -- # [[ request: 00:12:51.813 { 00:12:51.813 "name": "foobar", 00:12:51.813 "method": "nvmf_delete_target", 00:12:51.813 "req_id": 1 00:12:51.813 } 00:12:51.813 Got JSON-RPC error response 00:12:51.813 response: 00:12:51.813 { 00:12:51.813 "code": -32602, 00:12:51.813 "message": "The specified target doesn't exist, cannot delete it." 
00:12:51.813 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:51.813 00:21:38 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:51.813 00:21:38 -- target/invalid.sh@91 -- # nvmftestfini 00:12:51.813 00:21:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:51.813 00:21:38 -- nvmf/common.sh@116 -- # sync 00:12:51.813 00:21:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:51.813 00:21:38 -- nvmf/common.sh@119 -- # set +e 00:12:51.813 00:21:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:51.813 00:21:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:51.813 rmmod nvme_tcp 00:12:51.813 rmmod nvme_fabrics 00:12:51.813 rmmod nvme_keyring 00:12:51.813 00:21:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:51.813 00:21:38 -- nvmf/common.sh@123 -- # set -e 00:12:51.813 00:21:38 -- nvmf/common.sh@124 -- # return 0 00:12:51.813 00:21:38 -- nvmf/common.sh@477 -- # '[' -n 78149 ']' 00:12:51.813 00:21:38 -- nvmf/common.sh@478 -- # killprocess 78149 00:12:51.813 00:21:38 -- common/autotest_common.sh@926 -- # '[' -z 78149 ']' 00:12:51.813 00:21:38 -- common/autotest_common.sh@930 -- # kill -0 78149 00:12:51.813 00:21:38 -- common/autotest_common.sh@931 -- # uname 00:12:51.813 00:21:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:51.813 00:21:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78149 00:12:51.813 killing process with pid 78149 00:12:51.813 00:21:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:51.813 00:21:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:51.813 00:21:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78149' 00:12:51.813 00:21:38 -- common/autotest_common.sh@945 -- # kill 78149 00:12:51.813 00:21:38 -- common/autotest_common.sh@950 -- # wait 78149 00:12:52.071 00:21:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:52.071 00:21:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:52.071 00:21:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:52.071 00:21:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.071 00:21:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:52.071 00:21:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.071 00:21:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.071 00:21:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.071 00:21:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:52.071 00:12:52.071 real 0m5.920s 00:12:52.071 user 0m23.694s 00:12:52.071 sys 0m1.270s 00:12:52.071 00:21:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:52.071 00:21:39 -- common/autotest_common.sh@10 -- # set +x 00:12:52.071 ************************************ 00:12:52.071 END TEST nvmf_invalid 00:12:52.071 ************************************ 00:12:52.330 00:21:39 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:52.330 00:21:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:52.330 00:21:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:52.330 00:21:39 -- common/autotest_common.sh@10 -- # set +x 00:12:52.330 ************************************ 00:12:52.330 START TEST nvmf_abort 00:12:52.330 ************************************ 00:12:52.330 00:21:39 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:52.330 * Looking for test storage... 00:12:52.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:52.330 00:21:39 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:52.330 00:21:39 -- nvmf/common.sh@7 -- # uname -s 00:12:52.330 00:21:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.330 00:21:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.330 00:21:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.330 00:21:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.330 00:21:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.330 00:21:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.330 00:21:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.330 00:21:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.330 00:21:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.330 00:21:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.330 00:21:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:12:52.330 00:21:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:12:52.330 00:21:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.330 00:21:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.330 00:21:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:52.330 00:21:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:52.330 00:21:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.330 00:21:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.330 00:21:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.330 00:21:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.330 00:21:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.330 00:21:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.330 00:21:39 -- paths/export.sh@5 -- # export PATH 00:12:52.330 00:21:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.330 00:21:39 -- nvmf/common.sh@46 -- # : 0 00:12:52.330 00:21:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:52.330 00:21:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:52.330 00:21:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:52.330 00:21:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.330 00:21:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.330 00:21:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:52.330 00:21:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:52.330 00:21:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:52.330 00:21:39 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:52.330 00:21:39 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:52.330 00:21:39 -- target/abort.sh@14 -- # nvmftestinit 00:12:52.330 00:21:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:52.330 00:21:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.330 00:21:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:52.330 00:21:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:52.330 00:21:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:52.330 00:21:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.330 00:21:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.330 00:21:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.330 00:21:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:52.330 00:21:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:52.330 00:21:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:52.330 00:21:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:52.330 00:21:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:52.330 00:21:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:52.330 00:21:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.330 00:21:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.330 00:21:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:52.330 00:21:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:52.330 00:21:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:52.330 00:21:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:52.330 00:21:39 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:52.330 00:21:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.330 00:21:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:52.330 00:21:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:52.330 00:21:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:52.330 00:21:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:52.330 00:21:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:52.330 00:21:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:52.330 Cannot find device "nvmf_tgt_br" 00:12:52.330 00:21:39 -- nvmf/common.sh@154 -- # true 00:12:52.330 00:21:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:52.330 Cannot find device "nvmf_tgt_br2" 00:12:52.330 00:21:39 -- nvmf/common.sh@155 -- # true 00:12:52.330 00:21:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:52.330 00:21:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:52.330 Cannot find device "nvmf_tgt_br" 00:12:52.330 00:21:39 -- nvmf/common.sh@157 -- # true 00:12:52.330 00:21:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:52.330 Cannot find device "nvmf_tgt_br2" 00:12:52.330 00:21:39 -- nvmf/common.sh@158 -- # true 00:12:52.330 00:21:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:52.589 00:21:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:52.589 00:21:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:52.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:52.589 00:21:39 -- nvmf/common.sh@161 -- # true 00:12:52.589 00:21:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:52.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:52.589 00:21:39 -- nvmf/common.sh@162 -- # true 00:12:52.589 00:21:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:52.589 00:21:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:52.589 00:21:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:52.589 00:21:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:52.589 00:21:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:52.589 00:21:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:52.589 00:21:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:52.589 00:21:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:52.589 00:21:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:52.589 00:21:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:52.589 00:21:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:52.589 00:21:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:52.589 00:21:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:52.589 00:21:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:52.589 00:21:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:52.589 00:21:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:52.589 00:21:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:52.589 00:21:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:52.589 00:21:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:52.589 00:21:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:52.589 00:21:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:52.589 00:21:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:52.589 00:21:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:52.589 00:21:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:52.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:12:52.589 00:12:52.589 --- 10.0.0.2 ping statistics --- 00:12:52.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.589 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:12:52.589 00:21:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:52.589 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:52.589 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:12:52.589 00:12:52.589 --- 10.0.0.3 ping statistics --- 00:12:52.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.589 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:12:52.589 00:21:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:52.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:52.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:52.589 00:12:52.589 --- 10.0.0.1 ping statistics --- 00:12:52.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.589 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:52.589 00:21:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.589 00:21:39 -- nvmf/common.sh@421 -- # return 0 00:12:52.589 00:21:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:52.589 00:21:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.589 00:21:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:52.589 00:21:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:52.589 00:21:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.589 00:21:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:52.589 00:21:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:52.848 00:21:39 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:52.848 00:21:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:52.848 00:21:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:52.848 00:21:39 -- common/autotest_common.sh@10 -- # set +x 00:12:52.848 00:21:39 -- nvmf/common.sh@469 -- # nvmfpid=78667 00:12:52.848 00:21:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:52.848 00:21:39 -- nvmf/common.sh@470 -- # waitforlisten 78667 00:12:52.848 00:21:39 -- common/autotest_common.sh@819 -- # '[' -z 78667 ']' 00:12:52.848 00:21:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.848 00:21:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:52.848 00:21:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:52.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.848 00:21:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:52.848 00:21:39 -- common/autotest_common.sh@10 -- # set +x 00:12:52.848 [2024-07-13 00:21:39.896876] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:52.848 [2024-07-13 00:21:39.896981] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.848 [2024-07-13 00:21:40.043192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:53.105 [2024-07-13 00:21:40.136746] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:53.105 [2024-07-13 00:21:40.136937] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.105 [2024-07-13 00:21:40.136952] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.105 [2024-07-13 00:21:40.136963] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:53.105 [2024-07-13 00:21:40.137136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.105 [2024-07-13 00:21:40.137262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.105 [2024-07-13 00:21:40.137269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.670 00:21:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:53.670 00:21:40 -- common/autotest_common.sh@852 -- # return 0 00:12:53.670 00:21:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:53.670 00:21:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:53.670 00:21:40 -- common/autotest_common.sh@10 -- # set +x 00:12:53.670 00:21:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.670 00:21:40 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:53.670 00:21:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.670 00:21:40 -- common/autotest_common.sh@10 -- # set +x 00:12:53.670 [2024-07-13 00:21:40.897541] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.926 00:21:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.926 00:21:40 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:53.926 00:21:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.926 00:21:40 -- common/autotest_common.sh@10 -- # set +x 00:12:53.926 Malloc0 00:12:53.926 00:21:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.926 00:21:40 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:53.926 00:21:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.926 00:21:40 -- common/autotest_common.sh@10 -- # set +x 00:12:53.926 Delay0 00:12:53.926 00:21:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.926 00:21:40 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:53.926 00:21:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.926 00:21:40 -- common/autotest_common.sh@10 -- # set +x 00:12:53.926 00:21:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:12:53.926 00:21:40 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:53.926 00:21:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.926 00:21:40 -- common/autotest_common.sh@10 -- # set +x 00:12:53.926 00:21:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.926 00:21:40 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:53.926 00:21:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.926 00:21:40 -- common/autotest_common.sh@10 -- # set +x 00:12:53.926 [2024-07-13 00:21:40.972398] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.926 00:21:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.926 00:21:40 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:53.926 00:21:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.926 00:21:40 -- common/autotest_common.sh@10 -- # set +x 00:12:53.926 00:21:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.926 00:21:40 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:53.926 [2024-07-13 00:21:41.153162] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:56.481 Initializing NVMe Controllers 00:12:56.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:56.481 controller IO queue size 128 less than required 00:12:56.481 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:56.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:56.481 Initialization complete. Launching workers. 
00:12:56.481 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 34763 00:12:56.481 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34828, failed to submit 62 00:12:56.481 success 34763, unsuccess 65, failed 0 00:12:56.481 00:21:43 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:56.481 00:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.481 00:21:43 -- common/autotest_common.sh@10 -- # set +x 00:12:56.481 00:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.481 00:21:43 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:56.481 00:21:43 -- target/abort.sh@38 -- # nvmftestfini 00:12:56.481 00:21:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:56.481 00:21:43 -- nvmf/common.sh@116 -- # sync 00:12:56.481 00:21:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:56.481 00:21:43 -- nvmf/common.sh@119 -- # set +e 00:12:56.481 00:21:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:56.481 00:21:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:56.481 rmmod nvme_tcp 00:12:56.481 rmmod nvme_fabrics 00:12:56.481 rmmod nvme_keyring 00:12:56.481 00:21:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:56.481 00:21:43 -- nvmf/common.sh@123 -- # set -e 00:12:56.481 00:21:43 -- nvmf/common.sh@124 -- # return 0 00:12:56.481 00:21:43 -- nvmf/common.sh@477 -- # '[' -n 78667 ']' 00:12:56.481 00:21:43 -- nvmf/common.sh@478 -- # killprocess 78667 00:12:56.481 00:21:43 -- common/autotest_common.sh@926 -- # '[' -z 78667 ']' 00:12:56.481 00:21:43 -- common/autotest_common.sh@930 -- # kill -0 78667 00:12:56.481 00:21:43 -- common/autotest_common.sh@931 -- # uname 00:12:56.481 00:21:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:56.481 00:21:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78667 00:12:56.481 killing process with pid 78667 00:12:56.481 00:21:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:56.481 00:21:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:56.481 00:21:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78667' 00:12:56.481 00:21:43 -- common/autotest_common.sh@945 -- # kill 78667 00:12:56.481 00:21:43 -- common/autotest_common.sh@950 -- # wait 78667 00:12:56.481 00:21:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:56.481 00:21:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:56.481 00:21:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:56.481 00:21:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:56.481 00:21:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:56.481 00:21:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.481 00:21:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.481 00:21:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.481 00:21:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:56.481 00:12:56.481 real 0m4.288s 00:12:56.481 user 0m12.286s 00:12:56.481 sys 0m1.055s 00:12:56.481 00:21:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.481 00:21:43 -- common/autotest_common.sh@10 -- # set +x 00:12:56.481 ************************************ 00:12:56.481 END TEST nvmf_abort 00:12:56.481 ************************************ 00:12:56.481 00:21:43 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:56.481 00:21:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:56.481 00:21:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:56.481 00:21:43 -- common/autotest_common.sh@10 -- # set +x 00:12:56.481 ************************************ 00:12:56.481 START TEST nvmf_ns_hotplug_stress 00:12:56.481 ************************************ 00:12:56.481 00:21:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:56.739 * Looking for test storage... 00:12:56.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:56.739 00:21:43 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:56.739 00:21:43 -- nvmf/common.sh@7 -- # uname -s 00:12:56.739 00:21:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.739 00:21:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.739 00:21:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.739 00:21:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.739 00:21:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.739 00:21:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.739 00:21:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.739 00:21:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.739 00:21:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.739 00:21:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.739 00:21:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:12:56.739 00:21:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:12:56.739 00:21:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.739 00:21:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.739 00:21:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:56.739 00:21:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:56.739 00:21:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.739 00:21:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.739 00:21:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.739 00:21:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.739 00:21:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.739 00:21:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.739 00:21:43 -- paths/export.sh@5 -- # export PATH 00:12:56.739 00:21:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.739 00:21:43 -- nvmf/common.sh@46 -- # : 0 00:12:56.739 00:21:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:56.739 00:21:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:56.739 00:21:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:56.739 00:21:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.739 00:21:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.739 00:21:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:56.739 00:21:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:56.739 00:21:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:56.739 00:21:43 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:56.739 00:21:43 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:56.739 00:21:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:56.739 00:21:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.739 00:21:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:56.739 00:21:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:56.739 00:21:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:56.739 00:21:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.739 00:21:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.739 00:21:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.739 00:21:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:56.739 00:21:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:56.739 00:21:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:56.739 00:21:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:56.739 00:21:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:56.739 00:21:43 -- nvmf/common.sh@420 
-- # nvmf_veth_init 00:12:56.739 00:21:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.739 00:21:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.739 00:21:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:56.739 00:21:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:56.739 00:21:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:56.739 00:21:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:56.739 00:21:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:56.739 00:21:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.739 00:21:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:56.739 00:21:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:56.739 00:21:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:56.739 00:21:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:56.739 00:21:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:56.739 00:21:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:56.739 Cannot find device "nvmf_tgt_br" 00:12:56.739 00:21:43 -- nvmf/common.sh@154 -- # true 00:12:56.739 00:21:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:56.739 Cannot find device "nvmf_tgt_br2" 00:12:56.739 00:21:43 -- nvmf/common.sh@155 -- # true 00:12:56.739 00:21:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:56.739 00:21:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:56.739 Cannot find device "nvmf_tgt_br" 00:12:56.739 00:21:43 -- nvmf/common.sh@157 -- # true 00:12:56.739 00:21:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:56.739 Cannot find device "nvmf_tgt_br2" 00:12:56.739 00:21:43 -- nvmf/common.sh@158 -- # true 00:12:56.739 00:21:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:56.739 00:21:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:56.739 00:21:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:56.739 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.739 00:21:43 -- nvmf/common.sh@161 -- # true 00:12:56.739 00:21:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:56.739 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:56.739 00:21:43 -- nvmf/common.sh@162 -- # true 00:12:56.739 00:21:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:56.739 00:21:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:56.739 00:21:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:56.739 00:21:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:56.739 00:21:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:56.997 00:21:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:56.997 00:21:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:56.997 00:21:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:56.997 00:21:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:56.997 00:21:44 -- nvmf/common.sh@182 -- # ip link set 
nvmf_init_if up 00:12:56.997 00:21:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:56.997 00:21:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:56.997 00:21:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:56.997 00:21:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:56.997 00:21:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:56.997 00:21:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:56.997 00:21:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:56.997 00:21:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:56.997 00:21:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:56.997 00:21:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:56.997 00:21:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:56.997 00:21:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:56.997 00:21:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:56.997 00:21:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:56.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:56.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:12:56.997 00:12:56.997 --- 10.0.0.2 ping statistics --- 00:12:56.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.997 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:12:56.997 00:21:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:56.997 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:56.997 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:12:56.997 00:12:56.997 --- 10.0.0.3 ping statistics --- 00:12:56.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.997 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:56.997 00:21:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:56.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:56.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:12:56.997 00:12:56.997 --- 10.0.0.1 ping statistics --- 00:12:56.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.998 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:12:56.998 00:21:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.998 00:21:44 -- nvmf/common.sh@421 -- # return 0 00:12:56.998 00:21:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:56.998 00:21:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.998 00:21:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:56.998 00:21:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:56.998 00:21:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.998 00:21:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:56.998 00:21:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:56.998 00:21:44 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:56.998 00:21:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:56.998 00:21:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:56.998 00:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:56.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:56.998 00:21:44 -- nvmf/common.sh@469 -- # nvmfpid=78928 00:12:56.998 00:21:44 -- nvmf/common.sh@470 -- # waitforlisten 78928 00:12:56.998 00:21:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:56.998 00:21:44 -- common/autotest_common.sh@819 -- # '[' -z 78928 ']' 00:12:56.998 00:21:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.998 00:21:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:56.998 00:21:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.998 00:21:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:56.998 00:21:44 -- common/autotest_common.sh@10 -- # set +x 00:12:56.998 [2024-07-13 00:21:44.198718] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:56.998 [2024-07-13 00:21:44.198813] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.255 [2024-07-13 00:21:44.343640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:57.255 [2024-07-13 00:21:44.438135] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:57.255 [2024-07-13 00:21:44.438663] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.255 [2024-07-13 00:21:44.438850] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.255 [2024-07-13 00:21:44.439027] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:57.255 [2024-07-13 00:21:44.439286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.255 [2024-07-13 00:21:44.439389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.255 [2024-07-13 00:21:44.439407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.188 00:21:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:58.188 00:21:45 -- common/autotest_common.sh@852 -- # return 0 00:12:58.188 00:21:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:58.188 00:21:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:58.188 00:21:45 -- common/autotest_common.sh@10 -- # set +x 00:12:58.188 00:21:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.188 00:21:45 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:58.188 00:21:45 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:58.188 [2024-07-13 00:21:45.369401] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.188 00:21:45 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:58.446 00:21:45 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.705 [2024-07-13 00:21:45.810315] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.705 00:21:45 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:58.964 00:21:46 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:59.222 Malloc0 00:12:59.222 00:21:46 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:59.481 Delay0 00:12:59.481 00:21:46 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.739 00:21:46 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:59.998 NULL1 00:12:59.998 00:21:47 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:00.256 00:21:47 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79066 00:13:00.256 00:21:47 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:00.256 00:21:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:00.256 00:21:47 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.633 Read completed with error (sct=0, sc=11) 00:13:01.633 00:21:48 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:01.633 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:13:01.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:01.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:01.891 00:21:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:01.891 00:21:48 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:02.149 true 00:13:02.149 00:21:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:02.149 00:21:49 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.717 00:21:49 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.000 00:21:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:03.000 00:21:50 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:03.258 true 00:13:03.258 00:21:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:03.258 00:21:50 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.516 00:21:50 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.774 00:21:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:03.774 00:21:50 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:04.032 true 00:13:04.032 00:21:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:04.032 00:21:51 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.965 00:21:51 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.965 00:21:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:04.965 00:21:52 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:05.223 true 00:13:05.223 00:21:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:05.223 00:21:52 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.482 00:21:52 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.740 00:21:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:05.740 00:21:52 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:06.002 true 00:13:06.002 00:21:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:06.002 00:21:53 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.937 00:21:53 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.196 00:21:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:07.196 00:21:54 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:07.196 true 00:13:07.196 00:21:54 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:07.196 00:21:54 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.455 00:21:54 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.712 00:21:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:07.712 00:21:54 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:07.971 true 00:13:07.971 00:21:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:07.971 00:21:55 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.907 00:21:55 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.166 00:21:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:09.166 00:21:56 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:09.166 true 00:13:09.166 00:21:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:09.166 00:21:56 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.424 00:21:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.682 00:21:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:09.682 00:21:56 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:09.948 true 00:13:09.948 00:21:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:09.948 00:21:57 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.897 00:21:57 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.155 00:21:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:11.155 00:21:58 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:11.414 true 00:13:11.414 00:21:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:11.414 00:21:58 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.414 00:21:58 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.672 00:21:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:11.672 00:21:58 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:11.929 true 00:13:11.929 00:21:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:11.929 00:21:59 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.861 00:21:59 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.119 00:22:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:13.119 00:22:00 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:13.377 true 00:13:13.377 00:22:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:13.377 00:22:00 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.377 00:22:00 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.634 00:22:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:13.634 00:22:00 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:13.892 true 00:13:13.892 00:22:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:13.892 00:22:01 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.828 00:22:01 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.086 00:22:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:15.086 00:22:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:15.344 true 00:13:15.344 00:22:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:15.344 00:22:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.602 00:22:02 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.860 00:22:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:15.860 00:22:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:16.117 true 00:13:16.117 00:22:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:16.117 00:22:03 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.051 00:22:04 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.051 00:22:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:17.051 00:22:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:17.308 true 00:13:17.308 00:22:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:17.308 00:22:04 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.566 00:22:04 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.824 00:22:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:17.824 00:22:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:18.082 true 00:13:18.082 00:22:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:18.082 00:22:05 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.015 00:22:06 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.273 00:22:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:19.273 00:22:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:19.273 true 00:13:19.273 00:22:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:19.273 00:22:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.531 00:22:06 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.789 00:22:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:19.789 00:22:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:20.047 true 00:13:20.047 00:22:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:20.047 00:22:07 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.980 00:22:08 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.236 00:22:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:21.236 00:22:08 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:21.236 true 00:13:21.493 00:22:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:21.493 00:22:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.493 00:22:08 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.751 00:22:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:21.751 00:22:08 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:22.009 true 00:13:22.009 00:22:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:22.009 00:22:09 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.944 00:22:10 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.201 00:22:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:23.201 00:22:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:23.459 true 00:13:23.459 00:22:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:23.459 00:22:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.716 00:22:10 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.716 00:22:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:23.716 00:22:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:23.974 true 
00:13:23.974 00:22:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:23.974 00:22:11 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.910 00:22:12 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.169 00:22:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:25.169 00:22:12 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:25.427 true 00:13:25.427 00:22:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:25.427 00:22:12 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.686 00:22:12 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.944 00:22:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:25.944 00:22:12 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:26.203 true 00:13:26.203 00:22:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:26.203 00:22:13 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.139 00:22:14 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.139 00:22:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:27.139 00:22:14 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:27.397 true 00:13:27.397 00:22:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:27.397 00:22:14 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.656 00:22:14 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.915 00:22:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:27.915 00:22:14 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:28.173 true 00:13:28.174 00:22:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:28.174 00:22:15 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.110 00:22:16 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.110 00:22:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:29.110 00:22:16 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:29.369 true 00:13:29.369 00:22:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:29.369 00:22:16 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.627 00:22:16 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.886 00:22:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:29.886 00:22:16 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:29.886 true 00:13:29.886 00:22:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:29.886 00:22:17 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.265 Initializing NVMe Controllers 00:13:31.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:31.265 Controller IO queue size 128, less than required. 00:13:31.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:31.265 Controller IO queue size 128, less than required. 00:13:31.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:31.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:31.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:31.265 Initialization complete. Launching workers. 00:13:31.265 ======================================================== 00:13:31.265 Latency(us) 00:13:31.265 Device Information : IOPS MiB/s Average min max 00:13:31.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 268.30 0.13 273302.60 3807.40 1101369.84 00:13:31.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13125.47 6.41 9751.67 2356.14 520728.24 00:13:31.265 ======================================================== 00:13:31.265 Total : 13393.77 6.54 15031.02 2356.14 1101369.84 00:13:31.265 00:13:31.265 00:22:18 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.265 00:22:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:31.265 00:22:18 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:31.265 true 00:13:31.265 00:22:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79066 00:13:31.265 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79066) - No such process 00:13:31.265 00:22:18 -- target/ns_hotplug_stress.sh@53 -- # wait 79066 00:13:31.265 00:22:18 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.524 00:22:18 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:31.782 00:22:18 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:31.782 00:22:18 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:31.782 00:22:18 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:31.782 00:22:18 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:31.782 00:22:18 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:32.041 null0 00:13:32.041 00:22:19 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:32.041 00:22:19 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:32.041 00:22:19 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:32.041 null1 
00:13:32.300 00:22:19 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:32.300 00:22:19 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:32.300 00:22:19 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:32.300 null2 00:13:32.300 00:22:19 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:32.300 00:22:19 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:32.300 00:22:19 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:32.559 null3 00:13:32.559 00:22:19 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:32.559 00:22:19 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:32.559 00:22:19 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:32.818 null4 00:13:32.818 00:22:19 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:32.818 00:22:19 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:32.819 00:22:19 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:33.077 null5 00:13:33.077 00:22:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:33.077 00:22:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:33.077 00:22:20 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:33.336 null6 00:13:33.336 00:22:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:33.336 00:22:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:33.336 00:22:20 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:33.336 null7 00:13:33.336 00:22:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:33.336 00:22:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:33.336 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:33.336 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.336 00:22:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:33.336 00:22:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:33.336 00:22:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:33.336 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:33.336 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:33.336 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@66 -- # wait 80110 80112 80114 80116 80117 80119 80122 80123 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.337 00:22:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:33.596 00:22:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:33.596 00:22:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:33.855 00:22:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:33.855 00:22:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.855 00:22:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:33.855 00:22:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:33.855 00:22:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:33.855 00:22:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:33.855 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.855 00:22:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.855 00:22:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:33.855 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.855 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.855 00:22:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:33.855 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.855 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.855 00:22:21 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:33.855 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.855 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.855 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.855 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.855 00:22:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:33.855 00:22:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:34.114 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.114 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.114 00:22:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:34.114 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.114 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.114 00:22:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:34.114 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.114 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.114 00:22:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:34.114 00:22:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:34.114 00:22:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:34.114 00:22:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:34.372 00:22:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:34.372 00:22:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.372 00:22:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:34.372 00:22:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:34.372 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.373 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.373 00:22:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:34.373 00:22:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:34.373 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.373 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.373 00:22:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
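The interleaved add/remove traffic above comes from eight backgrounded add_remove workers, each bound to one namespace ID and one null bdev and performing ten hot-add/hot-remove cycles; the parent collects the worker PIDs and waits on them (the "wait 80110 80112 ... 80123" line). A sketch of that pattern as implied by the trace, not the verbatim script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    nthreads=8

    add_remove() {                                   # one worker: churn a single namespace
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &           # nsid 1..8 paired with null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"

Because all eight workers hammer the same subsystem concurrently, the add/remove lines for different namespace IDs interleave freely in the log; that concurrency is exactly the hot-plug stress being exercised.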
00:13:34.373 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.373 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.373 00:22:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:34.373 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.373 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.373 00:22:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:34.632 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.632 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.632 00:22:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:34.632 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.632 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.632 00:22:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:34.632 00:22:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:34.632 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.632 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.632 00:22:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:34.632 00:22:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:34.632 00:22:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:34.632 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.632 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.632 00:22:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:34.632 00:22:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:34.891 00:22:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:34.891 00:22:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.891 00:22:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:34.891 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.891 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.891 00:22:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:34.891 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.891 00:22:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.891 00:22:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:34.891 00:22:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:34.891 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.891 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.891 00:22:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:35.150 00:22:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.408 00:22:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:35.408 00:22:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:35.408 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.408 00:22:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:35.408 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.408 00:22:22 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:35.408 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.408 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.408 00:22:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:35.408 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.408 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.408 00:22:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:35.408 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.408 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.409 00:22:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:35.409 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.409 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.409 00:22:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:35.409 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.667 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.667 00:22:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:35.667 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.667 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.667 00:22:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:35.667 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.667 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.667 00:22:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:35.667 00:22:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:35.667 00:22:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:35.667 00:22:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.667 00:22:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:35.667 00:22:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:35.925 00:22:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:35.925 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.925 00:22:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.925 00:22:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:35.925 
00:22:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:35.925 00:22:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:35.925 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.925 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.925 00:22:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:35.925 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.925 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.925 00:22:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:35.925 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.925 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.925 00:22:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:35.925 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.925 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.925 00:22:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:36.183 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.183 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.183 00:22:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:36.183 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.183 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.183 00:22:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:36.183 00:22:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:36.183 00:22:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.183 00:22:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:36.183 00:22:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:36.183 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.183 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.183 00:22:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:36.183 00:22:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.440 00:22:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:36.698 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.698 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.698 00:22:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:36.698 00:22:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:36.698 00:22:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:36.698 00:22:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:36.698 00:22:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.698 00:22:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:36.698 00:22:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:36.698 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.698 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.698 00:22:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:36.698 00:22:23 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:36.956 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.956 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.956 00:22:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:36.956 00:22:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:36.956 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.956 00:22:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.956 00:22:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:36.956 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.956 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.956 00:22:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:36.956 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.956 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.956 00:22:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:36.956 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.956 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.956 00:22:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:36.956 00:22:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:36.956 00:22:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:37.214 00:22:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:37.214 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.214 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.214 00:22:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:37.214 00:22:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.214 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.214 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.214 00:22:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:37.214 00:22:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:37.214 00:22:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:37.214 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.214 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.214 00:22:24 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:37.214 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.214 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.215 00:22:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:37.472 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.472 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.472 00:22:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:37.472 00:22:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:37.472 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.472 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.472 00:22:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:37.472 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.472 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.472 00:22:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:37.472 00:22:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:37.472 00:22:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:37.472 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.472 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.472 00:22:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:37.472 00:22:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:37.730 00:22:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:37.730 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.730 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.730 00:22:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:37.730 00:22:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.730 00:22:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:37.730 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.730 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.730 00:22:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:37.730 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.730 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:13:37.730 00:22:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:37.730 00:22:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:37.730 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.730 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.730 00:22:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:37.988 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.988 00:22:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.988 00:22:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:37.988 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.988 00:22:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:37.988 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.988 00:22:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:37.988 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.988 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.988 00:22:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:37.988 00:22:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:37.988 00:22:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:37.988 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.988 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.988 00:22:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:37.988 00:22:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:37.988 00:22:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.245 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.245 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.245 00:22:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:38.245 00:22:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:38.245 00:22:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.245 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.245 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.245 00:22:25 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:38.245 00:22:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:38.245 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.245 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.245 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.245 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.245 00:22:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:38.245 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.245 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.503 00:22:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:38.503 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.503 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.503 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.503 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.503 00:22:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:38.503 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.503 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.503 00:22:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:38.503 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.503 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.761 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.761 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.761 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.761 00:22:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.761 00:22:25 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:38.761 00:22:25 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:38.761 00:22:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:38.761 00:22:25 -- nvmf/common.sh@116 -- # sync 00:13:38.761 00:22:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:38.761 00:22:25 -- nvmf/common.sh@119 -- # set +e 00:13:38.761 00:22:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:38.761 00:22:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:38.761 rmmod nvme_tcp 00:13:38.761 rmmod nvme_fabrics 00:13:38.761 rmmod nvme_keyring 00:13:38.761 00:22:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:38.761 00:22:25 -- nvmf/common.sh@123 -- # set -e 00:13:38.761 00:22:25 -- nvmf/common.sh@124 -- # return 0 00:13:38.761 00:22:25 -- nvmf/common.sh@477 -- # '[' -n 78928 ']' 00:13:38.761 00:22:25 -- nvmf/common.sh@478 -- # killprocess 78928 00:13:38.761 00:22:25 -- common/autotest_common.sh@926 -- # '[' -z 78928 ']' 00:13:38.761 00:22:25 -- common/autotest_common.sh@930 -- # kill -0 78928 00:13:38.761 00:22:25 -- common/autotest_common.sh@931 -- # uname 00:13:38.761 00:22:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:38.761 00:22:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
78928 00:13:38.761 00:22:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:38.761 00:22:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:38.761 00:22:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78928' 00:13:38.761 killing process with pid 78928 00:13:38.761 00:22:25 -- common/autotest_common.sh@945 -- # kill 78928 00:13:38.761 00:22:25 -- common/autotest_common.sh@950 -- # wait 78928 00:13:39.020 00:22:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:39.020 00:22:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:39.020 00:22:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:39.020 00:22:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.020 00:22:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:39.020 00:22:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.020 00:22:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.020 00:22:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.277 00:22:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:39.277 00:13:39.277 real 0m42.571s 00:13:39.277 user 3m20.507s 00:13:39.277 sys 0m12.874s 00:13:39.277 00:22:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.277 00:22:26 -- common/autotest_common.sh@10 -- # set +x 00:13:39.277 ************************************ 00:13:39.277 END TEST nvmf_ns_hotplug_stress 00:13:39.277 ************************************ 00:13:39.277 00:22:26 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:39.277 00:22:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:39.277 00:22:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:39.277 00:22:26 -- common/autotest_common.sh@10 -- # set +x 00:13:39.277 ************************************ 00:13:39.277 START TEST nvmf_connect_stress 00:13:39.277 ************************************ 00:13:39.277 00:22:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:39.277 * Looking for test storage... 
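Before the connect_stress trace gets going, note how the previous test shut down: nvmftestfini syncs, unloads the NVMe/TCP kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kills the nvmf_tgt process (pid 78928 in this run) via the killprocess helper, and removes the SPDK network namespace and initiator address. Condensed into a sketch; the netns removal command is an assumption, since the trace only shows the _remove_spdk_ns wrapper:

    sync
    modprobe -v -r nvme-tcp                          # also pulls out nvme_fabrics and nvme_keyring, per the rmmod output
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"               # 78928 here
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null     # assumed body of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if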
00:13:39.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:39.277 00:22:26 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:39.277 00:22:26 -- nvmf/common.sh@7 -- # uname -s 00:13:39.277 00:22:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.277 00:22:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.277 00:22:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.277 00:22:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.277 00:22:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.277 00:22:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.277 00:22:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.277 00:22:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.277 00:22:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.277 00:22:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.277 00:22:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:13:39.277 00:22:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:13:39.277 00:22:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.277 00:22:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.277 00:22:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:39.277 00:22:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:39.277 00:22:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.277 00:22:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.277 00:22:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.277 00:22:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.277 00:22:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.277 00:22:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.277 00:22:26 -- 
paths/export.sh@5 -- # export PATH 00:13:39.277 00:22:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.277 00:22:26 -- nvmf/common.sh@46 -- # : 0 00:13:39.277 00:22:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:39.277 00:22:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:39.277 00:22:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:39.277 00:22:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.277 00:22:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.277 00:22:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:39.278 00:22:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:39.278 00:22:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:39.278 00:22:26 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:39.278 00:22:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:39.278 00:22:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.278 00:22:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:39.278 00:22:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:39.278 00:22:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:39.278 00:22:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.278 00:22:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.278 00:22:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.278 00:22:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:39.278 00:22:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:39.278 00:22:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:39.278 00:22:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:39.278 00:22:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:39.278 00:22:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:39.278 00:22:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.278 00:22:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.278 00:22:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:39.278 00:22:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:39.278 00:22:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:39.278 00:22:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:39.278 00:22:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:39.278 00:22:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.278 00:22:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:39.278 00:22:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:39.278 00:22:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:39.278 00:22:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:39.278 00:22:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:39.278 00:22:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:39.278 Cannot find device "nvmf_tgt_br" 00:13:39.278 
00:22:26 -- nvmf/common.sh@154 -- # true 00:13:39.278 00:22:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:39.278 Cannot find device "nvmf_tgt_br2" 00:13:39.278 00:22:26 -- nvmf/common.sh@155 -- # true 00:13:39.278 00:22:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:39.278 00:22:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:39.278 Cannot find device "nvmf_tgt_br" 00:13:39.278 00:22:26 -- nvmf/common.sh@157 -- # true 00:13:39.278 00:22:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:39.278 Cannot find device "nvmf_tgt_br2" 00:13:39.278 00:22:26 -- nvmf/common.sh@158 -- # true 00:13:39.278 00:22:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:39.535 00:22:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:39.535 00:22:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:39.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:39.535 00:22:26 -- nvmf/common.sh@161 -- # true 00:13:39.535 00:22:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:39.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:39.535 00:22:26 -- nvmf/common.sh@162 -- # true 00:13:39.535 00:22:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:39.535 00:22:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:39.535 00:22:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:39.535 00:22:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:39.535 00:22:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:39.535 00:22:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:39.535 00:22:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:39.535 00:22:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:39.535 00:22:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:39.535 00:22:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:39.535 00:22:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:39.535 00:22:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:39.535 00:22:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:39.535 00:22:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:39.535 00:22:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:39.535 00:22:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:39.535 00:22:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:39.535 00:22:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:39.535 00:22:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:39.535 00:22:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:39.535 00:22:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:39.535 00:22:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:39.535 00:22:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:39.535 00:22:26 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:13:39.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:39.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:13:39.535 00:13:39.535 --- 10.0.0.2 ping statistics --- 00:13:39.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.535 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:13:39.535 00:22:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:39.535 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:39.535 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:13:39.535 00:13:39.535 --- 10.0.0.3 ping statistics --- 00:13:39.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.535 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:39.535 00:22:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:39.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:39.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:13:39.535 00:13:39.535 --- 10.0.0.1 ping statistics --- 00:13:39.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.535 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:39.535 00:22:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.535 00:22:26 -- nvmf/common.sh@421 -- # return 0 00:13:39.535 00:22:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:39.535 00:22:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.535 00:22:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:39.535 00:22:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:39.535 00:22:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.535 00:22:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:39.535 00:22:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:39.792 00:22:26 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:39.792 00:22:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:39.792 00:22:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:39.792 00:22:26 -- common/autotest_common.sh@10 -- # set +x 00:13:39.792 00:22:26 -- nvmf/common.sh@469 -- # nvmfpid=81425 00:13:39.792 00:22:26 -- nvmf/common.sh@470 -- # waitforlisten 81425 00:13:39.792 00:22:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:39.792 00:22:26 -- common/autotest_common.sh@819 -- # '[' -z 81425 ']' 00:13:39.792 00:22:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.792 00:22:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:39.792 00:22:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.792 00:22:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:39.792 00:22:26 -- common/autotest_common.sh@10 -- # set +x 00:13:39.792 [2024-07-13 00:22:26.841973] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
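The block of ip/iptables commands a little further up (nvmf_veth_init in nvmf/common.sh) builds the virtual test network used by this run: the target side lives in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, the initiator keeps 10.0.0.1 in the root namespace, and the peer ends of the veth pairs are joined by the nvmf_br bridge, with TCP port 4420 explicitly allowed in. A condensed sketch of that topology, using the same names and addresses as the trace (a summary, not the verbatim nvmf/common.sh code):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_br up                                        # all veth ends are also brought up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping runs above confirm that 10.0.0.2, 10.0.0.3 and 10.0.0.1 all answer across this bridge before the target application is started inside the namespace.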
00:13:39.793 [2024-07-13 00:22:26.842061] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.793 [2024-07-13 00:22:26.985158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:40.050 [2024-07-13 00:22:27.077741] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:40.050 [2024-07-13 00:22:27.077904] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.050 [2024-07-13 00:22:27.077918] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.050 [2024-07-13 00:22:27.077929] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.050 [2024-07-13 00:22:27.078108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.050 [2024-07-13 00:22:27.078658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.050 [2024-07-13 00:22:27.078664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.616 00:22:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:40.616 00:22:27 -- common/autotest_common.sh@852 -- # return 0 00:13:40.616 00:22:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:40.616 00:22:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:40.616 00:22:27 -- common/autotest_common.sh@10 -- # set +x 00:13:40.874 00:22:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.874 00:22:27 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:40.874 00:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.874 00:22:27 -- common/autotest_common.sh@10 -- # set +x 00:13:40.874 [2024-07-13 00:22:27.867254] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.874 00:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.874 00:22:27 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:40.874 00:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.874 00:22:27 -- common/autotest_common.sh@10 -- # set +x 00:13:40.874 00:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.874 00:22:27 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.874 00:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.874 00:22:27 -- common/autotest_common.sh@10 -- # set +x 00:13:40.874 [2024-07-13 00:22:27.885416] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.874 00:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.874 00:22:27 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:40.874 00:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.874 00:22:27 -- common/autotest_common.sh@10 -- # set +x 00:13:40.874 NULL1 00:13:40.874 00:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.874 00:22:27 -- target/connect_stress.sh@21 -- # PERF_PID=81477 00:13:40.874 00:22:27 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 
-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:40.874 00:22:27 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:40.874 00:22:27 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.874 00:22:27 -- target/connect_stress.sh@28 -- # cat 00:13:40.874 00:22:27 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:40.874 00:22:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.874 00:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.874 00:22:27 -- common/autotest_common.sh@10 -- # set +x 00:13:41.132 00:22:28 -- common/autotest_common.sh@579 
-- # [[ 0 == 0 ]] 00:13:41.132 00:22:28 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:41.132 00:22:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.132 00:22:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.132 00:22:28 -- common/autotest_common.sh@10 -- # set +x 00:13:41.699 00:22:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.699 00:22:28 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:41.699 00:22:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.699 00:22:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.699 00:22:28 -- common/autotest_common.sh@10 -- # set +x 00:13:41.957 00:22:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.957 00:22:28 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:41.957 00:22:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.957 00:22:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.957 00:22:28 -- common/autotest_common.sh@10 -- # set +x 00:13:42.245 00:22:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.245 00:22:29 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:42.245 00:22:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.245 00:22:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.245 00:22:29 -- common/autotest_common.sh@10 -- # set +x 00:13:42.504 00:22:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.504 00:22:29 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:42.504 00:22:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.504 00:22:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.504 00:22:29 -- common/autotest_common.sh@10 -- # set +x 00:13:42.763 00:22:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.763 00:22:29 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:42.763 00:22:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.763 00:22:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.763 00:22:29 -- common/autotest_common.sh@10 -- # set +x 00:13:43.330 00:22:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.330 00:22:30 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:43.330 00:22:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.330 00:22:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.330 00:22:30 -- common/autotest_common.sh@10 -- # set +x 00:13:43.590 00:22:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.590 00:22:30 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:43.590 00:22:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.590 00:22:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.590 00:22:30 -- common/autotest_common.sh@10 -- # set +x 00:13:43.849 00:22:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.849 00:22:30 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:43.849 00:22:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.849 00:22:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.849 00:22:30 -- common/autotest_common.sh@10 -- # set +x 00:13:44.108 00:22:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.108 00:22:31 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:44.108 00:22:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.108 00:22:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.108 00:22:31 -- common/autotest_common.sh@10 -- # set +x 00:13:44.367 00:22:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.367 
00:22:31 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:44.367 00:22:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.367 00:22:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.367 00:22:31 -- common/autotest_common.sh@10 -- # set +x 00:13:44.935 00:22:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.935 00:22:31 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:44.935 00:22:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.935 00:22:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.935 00:22:31 -- common/autotest_common.sh@10 -- # set +x 00:13:45.194 00:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.194 00:22:32 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:45.194 00:22:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.194 00:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.194 00:22:32 -- common/autotest_common.sh@10 -- # set +x 00:13:45.453 00:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.453 00:22:32 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:45.453 00:22:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.453 00:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.453 00:22:32 -- common/autotest_common.sh@10 -- # set +x 00:13:45.712 00:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.712 00:22:32 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:45.712 00:22:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.712 00:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.712 00:22:32 -- common/autotest_common.sh@10 -- # set +x 00:13:45.970 00:22:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.970 00:22:33 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:45.970 00:22:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.970 00:22:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.970 00:22:33 -- common/autotest_common.sh@10 -- # set +x 00:13:46.537 00:22:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.537 00:22:33 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:46.537 00:22:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.537 00:22:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.537 00:22:33 -- common/autotest_common.sh@10 -- # set +x 00:13:46.804 00:22:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.804 00:22:33 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:46.804 00:22:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.804 00:22:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.804 00:22:33 -- common/autotest_common.sh@10 -- # set +x 00:13:47.098 00:22:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.098 00:22:34 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:47.098 00:22:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.099 00:22:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.099 00:22:34 -- common/autotest_common.sh@10 -- # set +x 00:13:47.380 00:22:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.380 00:22:34 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:47.380 00:22:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.380 00:22:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.380 00:22:34 -- common/autotest_common.sh@10 -- # set +x 00:13:47.657 00:22:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.657 00:22:34 -- 
target/connect_stress.sh@34 -- # kill -0 81477 00:13:47.657 00:22:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.657 00:22:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.657 00:22:34 -- common/autotest_common.sh@10 -- # set +x 00:13:47.916 00:22:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.916 00:22:35 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:47.916 00:22:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.916 00:22:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.916 00:22:35 -- common/autotest_common.sh@10 -- # set +x 00:13:48.485 00:22:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.485 00:22:35 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:48.485 00:22:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.485 00:22:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.485 00:22:35 -- common/autotest_common.sh@10 -- # set +x 00:13:48.744 00:22:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.744 00:22:35 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:48.744 00:22:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.744 00:22:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.744 00:22:35 -- common/autotest_common.sh@10 -- # set +x 00:13:49.003 00:22:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.003 00:22:36 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:49.003 00:22:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.003 00:22:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.003 00:22:36 -- common/autotest_common.sh@10 -- # set +x 00:13:49.262 00:22:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.262 00:22:36 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:49.262 00:22:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.262 00:22:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.262 00:22:36 -- common/autotest_common.sh@10 -- # set +x 00:13:49.521 00:22:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.521 00:22:36 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:49.521 00:22:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.521 00:22:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.521 00:22:36 -- common/autotest_common.sh@10 -- # set +x 00:13:50.090 00:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.090 00:22:37 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:50.090 00:22:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.090 00:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.090 00:22:37 -- common/autotest_common.sh@10 -- # set +x 00:13:50.349 00:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.349 00:22:37 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:50.349 00:22:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.349 00:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.349 00:22:37 -- common/autotest_common.sh@10 -- # set +x 00:13:50.608 00:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.608 00:22:37 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:50.608 00:22:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.608 00:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.608 00:22:37 -- common/autotest_common.sh@10 -- # set +x 00:13:50.867 00:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.867 00:22:38 -- target/connect_stress.sh@34 -- # 
kill -0 81477 00:13:50.867 00:22:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.867 00:22:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.867 00:22:38 -- common/autotest_common.sh@10 -- # set +x 00:13:51.126 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:51.126 00:22:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.126 00:22:38 -- target/connect_stress.sh@34 -- # kill -0 81477 00:13:51.126 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81477) - No such process 00:13:51.126 00:22:38 -- target/connect_stress.sh@38 -- # wait 81477 00:13:51.126 00:22:38 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:51.126 00:22:38 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:51.126 00:22:38 -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:51.126 00:22:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:51.126 00:22:38 -- nvmf/common.sh@116 -- # sync 00:13:51.385 00:22:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:51.385 00:22:38 -- nvmf/common.sh@119 -- # set +e 00:13:51.385 00:22:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:51.385 00:22:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:51.385 rmmod nvme_tcp 00:13:51.385 rmmod nvme_fabrics 00:13:51.385 rmmod nvme_keyring 00:13:51.385 00:22:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:51.385 00:22:38 -- nvmf/common.sh@123 -- # set -e 00:13:51.385 00:22:38 -- nvmf/common.sh@124 -- # return 0 00:13:51.385 00:22:38 -- nvmf/common.sh@477 -- # '[' -n 81425 ']' 00:13:51.385 00:22:38 -- nvmf/common.sh@478 -- # killprocess 81425 00:13:51.385 00:22:38 -- common/autotest_common.sh@926 -- # '[' -z 81425 ']' 00:13:51.386 00:22:38 -- common/autotest_common.sh@930 -- # kill -0 81425 00:13:51.386 00:22:38 -- common/autotest_common.sh@931 -- # uname 00:13:51.386 00:22:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:51.386 00:22:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81425 00:13:51.386 killing process with pid 81425 00:13:51.386 00:22:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:51.386 00:22:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:51.386 00:22:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81425' 00:13:51.386 00:22:38 -- common/autotest_common.sh@945 -- # kill 81425 00:13:51.386 00:22:38 -- common/autotest_common.sh@950 -- # wait 81425 00:13:51.645 00:22:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:51.645 00:22:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:51.645 00:22:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:51.645 00:22:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:51.645 00:22:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:51.645 00:22:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.645 00:22:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.645 00:22:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.645 00:22:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:51.645 00:13:51.645 real 0m12.375s 00:13:51.645 user 0m41.626s 00:13:51.645 sys 0m2.976s 00:13:51.645 00:22:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.645 ************************************ 00:13:51.645 END TEST nvmf_connect_stress 00:13:51.645 00:22:38 -- 
common/autotest_common.sh@10 -- # set +x 00:13:51.645 ************************************ 00:13:51.645 00:22:38 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:51.645 00:22:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:51.645 00:22:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:51.645 00:22:38 -- common/autotest_common.sh@10 -- # set +x 00:13:51.645 ************************************ 00:13:51.645 START TEST nvmf_fused_ordering 00:13:51.645 ************************************ 00:13:51.645 00:22:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:51.645 * Looking for test storage... 00:13:51.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:51.645 00:22:38 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:51.645 00:22:38 -- nvmf/common.sh@7 -- # uname -s 00:13:51.645 00:22:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.645 00:22:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.645 00:22:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.645 00:22:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.645 00:22:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.645 00:22:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.645 00:22:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.645 00:22:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.645 00:22:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.645 00:22:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.645 00:22:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:13:51.645 00:22:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:13:51.645 00:22:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.645 00:22:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.645 00:22:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:51.645 00:22:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:51.645 00:22:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.645 00:22:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.645 00:22:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.645 00:22:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.646 00:22:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.646 00:22:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.646 00:22:38 -- paths/export.sh@5 -- # export PATH 00:13:51.646 00:22:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.646 00:22:38 -- nvmf/common.sh@46 -- # : 0 00:13:51.646 00:22:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:51.646 00:22:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:51.646 00:22:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:51.646 00:22:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.646 00:22:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.646 00:22:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:51.646 00:22:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:51.646 00:22:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:51.646 00:22:38 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:51.646 00:22:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:51.646 00:22:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.646 00:22:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:51.646 00:22:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:51.646 00:22:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:51.646 00:22:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.646 00:22:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.646 00:22:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.646 00:22:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:51.646 00:22:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:51.646 00:22:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:51.646 00:22:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:51.646 00:22:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:51.646 00:22:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:51.646 00:22:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.646 
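The NVME_HOSTNQN / NVME_HOSTID pair generated just above (via nvme gen-hostnqn) identifies the initiator side of the test. The fused_ordering and connect_stress tools in this log pass the target address as an SPDK transport ID string instead (the -r argument seen elsewhere in this log), so the following is purely an illustrative sketch, assuming standard nvme-cli options, of what a kernel-initiator connect to the subsystem this test creates further down would look like:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 \
      --hostid=b51f2fb3-a914-4041-8557-0311547dd192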
00:22:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.646 00:22:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:51.646 00:22:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:51.646 00:22:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:51.646 00:22:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:51.646 00:22:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:51.646 00:22:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.646 00:22:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:51.646 00:22:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:51.646 00:22:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:51.646 00:22:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:51.646 00:22:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:51.905 00:22:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:51.905 Cannot find device "nvmf_tgt_br" 00:13:51.905 00:22:38 -- nvmf/common.sh@154 -- # true 00:13:51.905 00:22:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:51.905 Cannot find device "nvmf_tgt_br2" 00:13:51.905 00:22:38 -- nvmf/common.sh@155 -- # true 00:13:51.905 00:22:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:51.905 00:22:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:51.905 Cannot find device "nvmf_tgt_br" 00:13:51.905 00:22:38 -- nvmf/common.sh@157 -- # true 00:13:51.905 00:22:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:51.905 Cannot find device "nvmf_tgt_br2" 00:13:51.905 00:22:38 -- nvmf/common.sh@158 -- # true 00:13:51.905 00:22:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:51.905 00:22:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:51.905 00:22:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:51.905 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:51.905 00:22:38 -- nvmf/common.sh@161 -- # true 00:13:51.905 00:22:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:51.905 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:51.905 00:22:38 -- nvmf/common.sh@162 -- # true 00:13:51.905 00:22:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:51.905 00:22:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:51.905 00:22:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:51.905 00:22:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:51.905 00:22:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:51.905 00:22:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:51.905 00:22:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:51.905 00:22:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:51.905 00:22:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:51.905 00:22:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:51.905 00:22:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:51.905 
00:22:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:51.905 00:22:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:51.905 00:22:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:51.905 00:22:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:51.905 00:22:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:51.905 00:22:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:51.905 00:22:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:51.905 00:22:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:51.905 00:22:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:51.905 00:22:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:52.164 00:22:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:52.164 00:22:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:52.164 00:22:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:52.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:13:52.164 00:13:52.164 --- 10.0.0.2 ping statistics --- 00:13:52.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.164 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:52.164 00:22:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:52.164 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:52.164 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:13:52.164 00:13:52.164 --- 10.0.0.3 ping statistics --- 00:13:52.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.164 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:52.164 00:22:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:52.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:52.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:13:52.164 00:13:52.164 --- 10.0.0.1 ping statistics --- 00:13:52.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.164 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:13:52.164 00:22:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.164 00:22:39 -- nvmf/common.sh@421 -- # return 0 00:13:52.164 00:22:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:52.164 00:22:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.164 00:22:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:52.164 00:22:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:52.164 00:22:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.164 00:22:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:52.164 00:22:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:52.164 00:22:39 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:52.164 00:22:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:52.164 00:22:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:52.164 00:22:39 -- common/autotest_common.sh@10 -- # set +x 00:13:52.164 00:22:39 -- nvmf/common.sh@469 -- # nvmfpid=81801 00:13:52.164 00:22:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:52.164 00:22:39 -- nvmf/common.sh@470 -- # waitforlisten 81801 00:13:52.164 00:22:39 -- common/autotest_common.sh@819 -- # '[' -z 81801 ']' 00:13:52.164 00:22:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.164 00:22:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:52.164 00:22:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.164 00:22:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:52.164 00:22:39 -- common/autotest_common.sh@10 -- # set +x 00:13:52.164 [2024-07-13 00:22:39.243037] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:52.164 [2024-07-13 00:22:39.243124] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.164 [2024-07-13 00:22:39.387167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.423 [2024-07-13 00:22:39.471394] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:52.423 [2024-07-13 00:22:39.471538] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.423 [2024-07-13 00:22:39.471550] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.423 [2024-07-13 00:22:39.471559] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
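nvmfappstart -m 0x2 above amounts to launching the target binary inside the test namespace and blocking until its RPC socket answers; a condensed sketch of what the surrounding log shows (the PID and masks are the ones from this run):

  # start nvmf_tgt in the target namespace: shm id 0, all tracepoint groups enabled, core mask 0x2
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!                      # 81801 in this run
  waitforlisten "$nvmfpid"        # waits until the app listens on /var/tmp/spdk.sock
  trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT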
00:13:52.423 [2024-07-13 00:22:39.471582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.360 00:22:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:53.360 00:22:40 -- common/autotest_common.sh@852 -- # return 0 00:13:53.360 00:22:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:53.360 00:22:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:53.360 00:22:40 -- common/autotest_common.sh@10 -- # set +x 00:13:53.360 00:22:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.360 00:22:40 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.360 00:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.360 00:22:40 -- common/autotest_common.sh@10 -- # set +x 00:13:53.360 [2024-07-13 00:22:40.280521] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.360 00:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.360 00:22:40 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:53.360 00:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.360 00:22:40 -- common/autotest_common.sh@10 -- # set +x 00:13:53.360 00:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.360 00:22:40 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.360 00:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.360 00:22:40 -- common/autotest_common.sh@10 -- # set +x 00:13:53.360 [2024-07-13 00:22:40.296636] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.360 00:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.360 00:22:40 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:53.360 00:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.360 00:22:40 -- common/autotest_common.sh@10 -- # set +x 00:13:53.360 NULL1 00:13:53.360 00:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.360 00:22:40 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:53.360 00:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.360 00:22:40 -- common/autotest_common.sh@10 -- # set +x 00:13:53.360 00:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.360 00:22:40 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:53.360 00:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.360 00:22:40 -- common/autotest_common.sh@10 -- # set +x 00:13:53.360 00:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.360 00:22:40 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:53.360 [2024-07-13 00:22:40.349659] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
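The rpc_cmd calls above stand up the target configuration that the fused_ordering tool then exercises. Assuming rpc_cmd forwards to SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock (an assumption; only the RPC names and arguments below appear in the log), the sequence is roughly:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512        # shows up below as "Namespace ID: 1 size: 1GB"
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # then drive fused commands against the new namespace:
  /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'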
00:13:53.360 [2024-07-13 00:22:40.349705] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81851 ] 00:13:53.619 Attached to nqn.2016-06.io.spdk:cnode1 00:13:53.619 Namespace ID: 1 size: 1GB 00:13:53.619 fused_ordering(0) 00:13:53.619 fused_ordering(1) 00:13:53.619 fused_ordering(2) 00:13:53.619 fused_ordering(3) 00:13:53.619 fused_ordering(4) 00:13:53.619 fused_ordering(5) 00:13:53.619 fused_ordering(6) 00:13:53.619 fused_ordering(7) 00:13:53.619 fused_ordering(8) 00:13:53.619 fused_ordering(9) 00:13:53.619 fused_ordering(10) 00:13:53.619 fused_ordering(11) 00:13:53.619 fused_ordering(12) 00:13:53.619 fused_ordering(13) 00:13:53.619 fused_ordering(14) 00:13:53.619 fused_ordering(15) 00:13:53.619 fused_ordering(16) 00:13:53.619 fused_ordering(17) 00:13:53.619 fused_ordering(18) 00:13:53.619 fused_ordering(19) 00:13:53.619 fused_ordering(20) 00:13:53.619 fused_ordering(21) 00:13:53.619 fused_ordering(22) 00:13:53.619 fused_ordering(23) 00:13:53.619 fused_ordering(24) 00:13:53.619 fused_ordering(25) 00:13:53.619 fused_ordering(26) 00:13:53.619 fused_ordering(27) 00:13:53.619 fused_ordering(28) 00:13:53.619 fused_ordering(29) 00:13:53.619 fused_ordering(30) 00:13:53.619 fused_ordering(31) 00:13:53.619 fused_ordering(32) 00:13:53.619 fused_ordering(33) 00:13:53.619 fused_ordering(34) 00:13:53.619 fused_ordering(35) 00:13:53.619 fused_ordering(36) 00:13:53.619 fused_ordering(37) 00:13:53.619 fused_ordering(38) 00:13:53.619 fused_ordering(39) 00:13:53.619 fused_ordering(40) 00:13:53.619 fused_ordering(41) 00:13:53.619 fused_ordering(42) 00:13:53.619 fused_ordering(43) 00:13:53.619 fused_ordering(44) 00:13:53.619 fused_ordering(45) 00:13:53.619 fused_ordering(46) 00:13:53.619 fused_ordering(47) 00:13:53.619 fused_ordering(48) 00:13:53.619 fused_ordering(49) 00:13:53.619 fused_ordering(50) 00:13:53.619 fused_ordering(51) 00:13:53.619 fused_ordering(52) 00:13:53.619 fused_ordering(53) 00:13:53.619 fused_ordering(54) 00:13:53.619 fused_ordering(55) 00:13:53.619 fused_ordering(56) 00:13:53.619 fused_ordering(57) 00:13:53.619 fused_ordering(58) 00:13:53.619 fused_ordering(59) 00:13:53.619 fused_ordering(60) 00:13:53.619 fused_ordering(61) 00:13:53.619 fused_ordering(62) 00:13:53.619 fused_ordering(63) 00:13:53.619 fused_ordering(64) 00:13:53.619 fused_ordering(65) 00:13:53.619 fused_ordering(66) 00:13:53.619 fused_ordering(67) 00:13:53.619 fused_ordering(68) 00:13:53.619 fused_ordering(69) 00:13:53.619 fused_ordering(70) 00:13:53.619 fused_ordering(71) 00:13:53.619 fused_ordering(72) 00:13:53.619 fused_ordering(73) 00:13:53.619 fused_ordering(74) 00:13:53.619 fused_ordering(75) 00:13:53.619 fused_ordering(76) 00:13:53.619 fused_ordering(77) 00:13:53.619 fused_ordering(78) 00:13:53.619 fused_ordering(79) 00:13:53.619 fused_ordering(80) 00:13:53.619 fused_ordering(81) 00:13:53.619 fused_ordering(82) 00:13:53.619 fused_ordering(83) 00:13:53.619 fused_ordering(84) 00:13:53.619 fused_ordering(85) 00:13:53.619 fused_ordering(86) 00:13:53.619 fused_ordering(87) 00:13:53.619 fused_ordering(88) 00:13:53.619 fused_ordering(89) 00:13:53.619 fused_ordering(90) 00:13:53.619 fused_ordering(91) 00:13:53.619 fused_ordering(92) 00:13:53.619 fused_ordering(93) 00:13:53.619 fused_ordering(94) 00:13:53.619 fused_ordering(95) 00:13:53.619 fused_ordering(96) 00:13:53.619 fused_ordering(97) 00:13:53.619 fused_ordering(98) 
00:13:53.619 fused_ordering(99) 00:13:53.619 fused_ordering(100) 00:13:53.619 fused_ordering(101) 00:13:53.619 fused_ordering(102) 00:13:53.619 fused_ordering(103) 00:13:53.619 fused_ordering(104) 00:13:53.619 fused_ordering(105) 00:13:53.619 fused_ordering(106) 00:13:53.619 fused_ordering(107) 00:13:53.619 fused_ordering(108) 00:13:53.619 fused_ordering(109) 00:13:53.619 fused_ordering(110) 00:13:53.619 fused_ordering(111) 00:13:53.619 fused_ordering(112) 00:13:53.619 fused_ordering(113) 00:13:53.619 fused_ordering(114) 00:13:53.619 fused_ordering(115) 00:13:53.619 fused_ordering(116) 00:13:53.619 fused_ordering(117) 00:13:53.619 fused_ordering(118) 00:13:53.619 fused_ordering(119) 00:13:53.619 fused_ordering(120) 00:13:53.619 fused_ordering(121) 00:13:53.619 fused_ordering(122) 00:13:53.619 fused_ordering(123) 00:13:53.619 fused_ordering(124) 00:13:53.619 fused_ordering(125) 00:13:53.619 fused_ordering(126) 00:13:53.619 fused_ordering(127) 00:13:53.619 fused_ordering(128) 00:13:53.619 fused_ordering(129) 00:13:53.619 fused_ordering(130) 00:13:53.619 fused_ordering(131) 00:13:53.619 fused_ordering(132) 00:13:53.619 fused_ordering(133) 00:13:53.619 fused_ordering(134) 00:13:53.619 fused_ordering(135) 00:13:53.619 fused_ordering(136) 00:13:53.619 fused_ordering(137) 00:13:53.619 fused_ordering(138) 00:13:53.619 fused_ordering(139) 00:13:53.619 fused_ordering(140) 00:13:53.619 fused_ordering(141) 00:13:53.619 fused_ordering(142) 00:13:53.619 fused_ordering(143) 00:13:53.619 fused_ordering(144) 00:13:53.619 fused_ordering(145) 00:13:53.619 fused_ordering(146) 00:13:53.619 fused_ordering(147) 00:13:53.619 fused_ordering(148) 00:13:53.619 fused_ordering(149) 00:13:53.619 fused_ordering(150) 00:13:53.619 fused_ordering(151) 00:13:53.619 fused_ordering(152) 00:13:53.619 fused_ordering(153) 00:13:53.619 fused_ordering(154) 00:13:53.619 fused_ordering(155) 00:13:53.619 fused_ordering(156) 00:13:53.619 fused_ordering(157) 00:13:53.619 fused_ordering(158) 00:13:53.619 fused_ordering(159) 00:13:53.619 fused_ordering(160) 00:13:53.619 fused_ordering(161) 00:13:53.619 fused_ordering(162) 00:13:53.619 fused_ordering(163) 00:13:53.619 fused_ordering(164) 00:13:53.619 fused_ordering(165) 00:13:53.619 fused_ordering(166) 00:13:53.619 fused_ordering(167) 00:13:53.619 fused_ordering(168) 00:13:53.619 fused_ordering(169) 00:13:53.619 fused_ordering(170) 00:13:53.619 fused_ordering(171) 00:13:53.619 fused_ordering(172) 00:13:53.619 fused_ordering(173) 00:13:53.619 fused_ordering(174) 00:13:53.619 fused_ordering(175) 00:13:53.619 fused_ordering(176) 00:13:53.619 fused_ordering(177) 00:13:53.619 fused_ordering(178) 00:13:53.620 fused_ordering(179) 00:13:53.620 fused_ordering(180) 00:13:53.620 fused_ordering(181) 00:13:53.620 fused_ordering(182) 00:13:53.620 fused_ordering(183) 00:13:53.620 fused_ordering(184) 00:13:53.620 fused_ordering(185) 00:13:53.620 fused_ordering(186) 00:13:53.620 fused_ordering(187) 00:13:53.620 fused_ordering(188) 00:13:53.620 fused_ordering(189) 00:13:53.620 fused_ordering(190) 00:13:53.620 fused_ordering(191) 00:13:53.620 fused_ordering(192) 00:13:53.620 fused_ordering(193) 00:13:53.620 fused_ordering(194) 00:13:53.620 fused_ordering(195) 00:13:53.620 fused_ordering(196) 00:13:53.620 fused_ordering(197) 00:13:53.620 fused_ordering(198) 00:13:53.620 fused_ordering(199) 00:13:53.620 fused_ordering(200) 00:13:53.620 fused_ordering(201) 00:13:53.620 fused_ordering(202) 00:13:53.620 fused_ordering(203) 00:13:53.620 fused_ordering(204) 00:13:53.620 fused_ordering(205) 00:13:53.879 
fused_ordering(206) 00:13:53.879 fused_ordering(207) 00:13:53.879 fused_ordering(208) 00:13:53.879 fused_ordering(209) 00:13:53.879 fused_ordering(210) 00:13:53.879 fused_ordering(211) 00:13:53.879 fused_ordering(212) 00:13:53.879 fused_ordering(213) 00:13:53.879 fused_ordering(214) 00:13:53.879 fused_ordering(215) 00:13:53.879 fused_ordering(216) 00:13:53.879 fused_ordering(217) 00:13:53.879 fused_ordering(218) 00:13:53.879 fused_ordering(219) 00:13:53.879 fused_ordering(220) 00:13:53.879 fused_ordering(221) 00:13:53.879 fused_ordering(222) 00:13:53.879 fused_ordering(223) 00:13:53.879 fused_ordering(224) 00:13:53.879 fused_ordering(225) 00:13:53.879 fused_ordering(226) 00:13:53.879 fused_ordering(227) 00:13:53.879 fused_ordering(228) 00:13:53.879 fused_ordering(229) 00:13:53.879 fused_ordering(230) 00:13:53.879 fused_ordering(231) 00:13:53.879 fused_ordering(232) 00:13:53.879 fused_ordering(233) 00:13:53.879 fused_ordering(234) 00:13:53.879 fused_ordering(235) 00:13:53.879 fused_ordering(236) 00:13:53.879 fused_ordering(237) 00:13:53.879 fused_ordering(238) 00:13:53.879 fused_ordering(239) 00:13:53.879 fused_ordering(240) 00:13:53.879 fused_ordering(241) 00:13:53.879 fused_ordering(242) 00:13:53.879 fused_ordering(243) 00:13:53.879 fused_ordering(244) 00:13:53.879 fused_ordering(245) 00:13:53.879 fused_ordering(246) 00:13:53.879 fused_ordering(247) 00:13:53.879 fused_ordering(248) 00:13:53.879 fused_ordering(249) 00:13:53.879 fused_ordering(250) 00:13:53.879 fused_ordering(251) 00:13:53.879 fused_ordering(252) 00:13:53.879 fused_ordering(253) 00:13:53.879 fused_ordering(254) 00:13:53.879 fused_ordering(255) 00:13:53.879 fused_ordering(256) 00:13:53.879 fused_ordering(257) 00:13:53.879 fused_ordering(258) 00:13:53.879 fused_ordering(259) 00:13:53.879 fused_ordering(260) 00:13:53.879 fused_ordering(261) 00:13:53.879 fused_ordering(262) 00:13:53.879 fused_ordering(263) 00:13:53.879 fused_ordering(264) 00:13:53.879 fused_ordering(265) 00:13:53.879 fused_ordering(266) 00:13:53.879 fused_ordering(267) 00:13:53.879 fused_ordering(268) 00:13:53.879 fused_ordering(269) 00:13:53.879 fused_ordering(270) 00:13:53.879 fused_ordering(271) 00:13:53.879 fused_ordering(272) 00:13:53.879 fused_ordering(273) 00:13:53.879 fused_ordering(274) 00:13:53.879 fused_ordering(275) 00:13:53.879 fused_ordering(276) 00:13:53.879 fused_ordering(277) 00:13:53.879 fused_ordering(278) 00:13:53.879 fused_ordering(279) 00:13:53.879 fused_ordering(280) 00:13:53.879 fused_ordering(281) 00:13:53.879 fused_ordering(282) 00:13:53.879 fused_ordering(283) 00:13:53.879 fused_ordering(284) 00:13:53.879 fused_ordering(285) 00:13:53.879 fused_ordering(286) 00:13:53.879 fused_ordering(287) 00:13:53.879 fused_ordering(288) 00:13:53.879 fused_ordering(289) 00:13:53.879 fused_ordering(290) 00:13:53.879 fused_ordering(291) 00:13:53.879 fused_ordering(292) 00:13:53.879 fused_ordering(293) 00:13:53.879 fused_ordering(294) 00:13:53.879 fused_ordering(295) 00:13:53.879 fused_ordering(296) 00:13:53.879 fused_ordering(297) 00:13:53.879 fused_ordering(298) 00:13:53.879 fused_ordering(299) 00:13:53.879 fused_ordering(300) 00:13:53.879 fused_ordering(301) 00:13:53.879 fused_ordering(302) 00:13:53.879 fused_ordering(303) 00:13:53.879 fused_ordering(304) 00:13:53.879 fused_ordering(305) 00:13:53.879 fused_ordering(306) 00:13:53.879 fused_ordering(307) 00:13:53.879 fused_ordering(308) 00:13:53.879 fused_ordering(309) 00:13:53.879 fused_ordering(310) 00:13:53.879 fused_ordering(311) 00:13:53.879 fused_ordering(312) 00:13:53.879 fused_ordering(313) 
00:13:53.879 fused_ordering(314) 00:13:53.879 fused_ordering(315) 00:13:53.879 fused_ordering(316) 00:13:53.879 fused_ordering(317) 00:13:53.879 fused_ordering(318) 00:13:53.879 fused_ordering(319) 00:13:53.879 fused_ordering(320) 00:13:53.880 fused_ordering(321) 00:13:53.880 fused_ordering(322) 00:13:53.880 fused_ordering(323) 00:13:53.880 fused_ordering(324) 00:13:53.880 fused_ordering(325) 00:13:53.880 fused_ordering(326) 00:13:53.880 fused_ordering(327) 00:13:53.880 fused_ordering(328) 00:13:53.880 fused_ordering(329) 00:13:53.880 fused_ordering(330) 00:13:53.880 fused_ordering(331) 00:13:53.880 fused_ordering(332) 00:13:53.880 fused_ordering(333) 00:13:53.880 fused_ordering(334) 00:13:53.880 fused_ordering(335) 00:13:53.880 fused_ordering(336) 00:13:53.880 fused_ordering(337) 00:13:53.880 fused_ordering(338) 00:13:53.880 fused_ordering(339) 00:13:53.880 fused_ordering(340) 00:13:53.880 fused_ordering(341) 00:13:53.880 fused_ordering(342) 00:13:53.880 fused_ordering(343) 00:13:53.880 fused_ordering(344) 00:13:53.880 fused_ordering(345) 00:13:53.880 fused_ordering(346) 00:13:53.880 fused_ordering(347) 00:13:53.880 fused_ordering(348) 00:13:53.880 fused_ordering(349) 00:13:53.880 fused_ordering(350) 00:13:53.880 fused_ordering(351) 00:13:53.880 fused_ordering(352) 00:13:53.880 fused_ordering(353) 00:13:53.880 fused_ordering(354) 00:13:53.880 fused_ordering(355) 00:13:53.880 fused_ordering(356) 00:13:53.880 fused_ordering(357) 00:13:53.880 fused_ordering(358) 00:13:53.880 fused_ordering(359) 00:13:53.880 fused_ordering(360) 00:13:53.880 fused_ordering(361) 00:13:53.880 fused_ordering(362) 00:13:53.880 fused_ordering(363) 00:13:53.880 fused_ordering(364) 00:13:53.880 fused_ordering(365) 00:13:53.880 fused_ordering(366) 00:13:53.880 fused_ordering(367) 00:13:53.880 fused_ordering(368) 00:13:53.880 fused_ordering(369) 00:13:53.880 fused_ordering(370) 00:13:53.880 fused_ordering(371) 00:13:53.880 fused_ordering(372) 00:13:53.880 fused_ordering(373) 00:13:53.880 fused_ordering(374) 00:13:53.880 fused_ordering(375) 00:13:53.880 fused_ordering(376) 00:13:53.880 fused_ordering(377) 00:13:53.880 fused_ordering(378) 00:13:53.880 fused_ordering(379) 00:13:53.880 fused_ordering(380) 00:13:53.880 fused_ordering(381) 00:13:53.880 fused_ordering(382) 00:13:53.880 fused_ordering(383) 00:13:53.880 fused_ordering(384) 00:13:53.880 fused_ordering(385) 00:13:53.880 fused_ordering(386) 00:13:53.880 fused_ordering(387) 00:13:53.880 fused_ordering(388) 00:13:53.880 fused_ordering(389) 00:13:53.880 fused_ordering(390) 00:13:53.880 fused_ordering(391) 00:13:53.880 fused_ordering(392) 00:13:53.880 fused_ordering(393) 00:13:53.880 fused_ordering(394) 00:13:53.880 fused_ordering(395) 00:13:53.880 fused_ordering(396) 00:13:53.880 fused_ordering(397) 00:13:53.880 fused_ordering(398) 00:13:53.880 fused_ordering(399) 00:13:53.880 fused_ordering(400) 00:13:53.880 fused_ordering(401) 00:13:53.880 fused_ordering(402) 00:13:53.880 fused_ordering(403) 00:13:53.880 fused_ordering(404) 00:13:53.880 fused_ordering(405) 00:13:53.880 fused_ordering(406) 00:13:53.880 fused_ordering(407) 00:13:53.880 fused_ordering(408) 00:13:53.880 fused_ordering(409) 00:13:53.880 fused_ordering(410) 00:13:54.139 fused_ordering(411) 00:13:54.139 fused_ordering(412) 00:13:54.139 fused_ordering(413) 00:13:54.139 fused_ordering(414) 00:13:54.139 fused_ordering(415) 00:13:54.139 fused_ordering(416) 00:13:54.139 fused_ordering(417) 00:13:54.139 fused_ordering(418) 00:13:54.139 fused_ordering(419) 00:13:54.139 fused_ordering(420) 00:13:54.139 
fused_ordering(421) 00:13:54.139 fused_ordering(422) 00:13:54.139 fused_ordering(423) 00:13:54.139 fused_ordering(424) 00:13:54.139 fused_ordering(425) 00:13:54.139 fused_ordering(426) 00:13:54.139 fused_ordering(427) 00:13:54.139 fused_ordering(428) 00:13:54.139 fused_ordering(429) 00:13:54.139 fused_ordering(430) 00:13:54.139 fused_ordering(431) 00:13:54.139 fused_ordering(432) 00:13:54.139 fused_ordering(433) 00:13:54.139 fused_ordering(434) 00:13:54.139 fused_ordering(435) 00:13:54.139 fused_ordering(436) 00:13:54.139 fused_ordering(437) 00:13:54.139 fused_ordering(438) 00:13:54.139 fused_ordering(439) 00:13:54.139 fused_ordering(440) 00:13:54.139 fused_ordering(441) 00:13:54.139 fused_ordering(442) 00:13:54.139 fused_ordering(443) 00:13:54.139 fused_ordering(444) 00:13:54.139 fused_ordering(445) 00:13:54.139 fused_ordering(446) 00:13:54.139 fused_ordering(447) 00:13:54.139 fused_ordering(448) 00:13:54.139 fused_ordering(449) 00:13:54.139 fused_ordering(450) 00:13:54.139 fused_ordering(451) 00:13:54.139 fused_ordering(452) 00:13:54.139 fused_ordering(453) 00:13:54.139 fused_ordering(454) 00:13:54.139 fused_ordering(455) 00:13:54.139 fused_ordering(456) 00:13:54.139 fused_ordering(457) 00:13:54.139 fused_ordering(458) 00:13:54.139 fused_ordering(459) 00:13:54.139 fused_ordering(460) 00:13:54.139 fused_ordering(461) 00:13:54.139 fused_ordering(462) 00:13:54.139 fused_ordering(463) 00:13:54.139 fused_ordering(464) 00:13:54.139 fused_ordering(465) 00:13:54.139 fused_ordering(466) 00:13:54.139 fused_ordering(467) 00:13:54.139 fused_ordering(468) 00:13:54.139 fused_ordering(469) 00:13:54.139 fused_ordering(470) 00:13:54.139 fused_ordering(471) 00:13:54.139 fused_ordering(472) 00:13:54.139 fused_ordering(473) 00:13:54.139 fused_ordering(474) 00:13:54.139 fused_ordering(475) 00:13:54.139 fused_ordering(476) 00:13:54.139 fused_ordering(477) 00:13:54.139 fused_ordering(478) 00:13:54.139 fused_ordering(479) 00:13:54.139 fused_ordering(480) 00:13:54.139 fused_ordering(481) 00:13:54.139 fused_ordering(482) 00:13:54.139 fused_ordering(483) 00:13:54.139 fused_ordering(484) 00:13:54.139 fused_ordering(485) 00:13:54.139 fused_ordering(486) 00:13:54.139 fused_ordering(487) 00:13:54.139 fused_ordering(488) 00:13:54.139 fused_ordering(489) 00:13:54.139 fused_ordering(490) 00:13:54.139 fused_ordering(491) 00:13:54.139 fused_ordering(492) 00:13:54.139 fused_ordering(493) 00:13:54.139 fused_ordering(494) 00:13:54.139 fused_ordering(495) 00:13:54.139 fused_ordering(496) 00:13:54.139 fused_ordering(497) 00:13:54.139 fused_ordering(498) 00:13:54.139 fused_ordering(499) 00:13:54.139 fused_ordering(500) 00:13:54.139 fused_ordering(501) 00:13:54.139 fused_ordering(502) 00:13:54.139 fused_ordering(503) 00:13:54.139 fused_ordering(504) 00:13:54.139 fused_ordering(505) 00:13:54.139 fused_ordering(506) 00:13:54.139 fused_ordering(507) 00:13:54.139 fused_ordering(508) 00:13:54.139 fused_ordering(509) 00:13:54.139 fused_ordering(510) 00:13:54.139 fused_ordering(511) 00:13:54.139 fused_ordering(512) 00:13:54.139 fused_ordering(513) 00:13:54.139 fused_ordering(514) 00:13:54.139 fused_ordering(515) 00:13:54.139 fused_ordering(516) 00:13:54.139 fused_ordering(517) 00:13:54.139 fused_ordering(518) 00:13:54.139 fused_ordering(519) 00:13:54.139 fused_ordering(520) 00:13:54.139 fused_ordering(521) 00:13:54.139 fused_ordering(522) 00:13:54.139 fused_ordering(523) 00:13:54.139 fused_ordering(524) 00:13:54.139 fused_ordering(525) 00:13:54.139 fused_ordering(526) 00:13:54.139 fused_ordering(527) 00:13:54.139 fused_ordering(528) 
00:13:54.139 fused_ordering(529) 00:13:54.139 fused_ordering(530) 00:13:54.139 fused_ordering(531) 00:13:54.139 fused_ordering(532) 00:13:54.139 fused_ordering(533) 00:13:54.139 fused_ordering(534) 00:13:54.139 fused_ordering(535) 00:13:54.139 fused_ordering(536) 00:13:54.139 fused_ordering(537) 00:13:54.139 fused_ordering(538) 00:13:54.139 fused_ordering(539) 00:13:54.139 fused_ordering(540) 00:13:54.139 fused_ordering(541) 00:13:54.139 fused_ordering(542) 00:13:54.139 fused_ordering(543) 00:13:54.139 fused_ordering(544) 00:13:54.139 fused_ordering(545) 00:13:54.139 fused_ordering(546) 00:13:54.139 fused_ordering(547) 00:13:54.139 fused_ordering(548) 00:13:54.139 fused_ordering(549) 00:13:54.139 fused_ordering(550) 00:13:54.139 fused_ordering(551) 00:13:54.139 fused_ordering(552) 00:13:54.139 fused_ordering(553) 00:13:54.139 fused_ordering(554) 00:13:54.139 fused_ordering(555) 00:13:54.139 fused_ordering(556) 00:13:54.139 fused_ordering(557) 00:13:54.139 fused_ordering(558) 00:13:54.139 fused_ordering(559) 00:13:54.139 fused_ordering(560) 00:13:54.139 fused_ordering(561) 00:13:54.139 fused_ordering(562) 00:13:54.139 fused_ordering(563) 00:13:54.139 fused_ordering(564) 00:13:54.139 fused_ordering(565) 00:13:54.139 fused_ordering(566) 00:13:54.139 fused_ordering(567) 00:13:54.139 fused_ordering(568) 00:13:54.139 fused_ordering(569) 00:13:54.139 fused_ordering(570) 00:13:54.139 fused_ordering(571) 00:13:54.139 fused_ordering(572) 00:13:54.139 fused_ordering(573) 00:13:54.139 fused_ordering(574) 00:13:54.139 fused_ordering(575) 00:13:54.139 fused_ordering(576) 00:13:54.139 fused_ordering(577) 00:13:54.139 fused_ordering(578) 00:13:54.139 fused_ordering(579) 00:13:54.139 fused_ordering(580) 00:13:54.139 fused_ordering(581) 00:13:54.139 fused_ordering(582) 00:13:54.139 fused_ordering(583) 00:13:54.139 fused_ordering(584) 00:13:54.139 fused_ordering(585) 00:13:54.139 fused_ordering(586) 00:13:54.139 fused_ordering(587) 00:13:54.139 fused_ordering(588) 00:13:54.139 fused_ordering(589) 00:13:54.139 fused_ordering(590) 00:13:54.139 fused_ordering(591) 00:13:54.139 fused_ordering(592) 00:13:54.139 fused_ordering(593) 00:13:54.139 fused_ordering(594) 00:13:54.139 fused_ordering(595) 00:13:54.139 fused_ordering(596) 00:13:54.139 fused_ordering(597) 00:13:54.139 fused_ordering(598) 00:13:54.139 fused_ordering(599) 00:13:54.139 fused_ordering(600) 00:13:54.139 fused_ordering(601) 00:13:54.139 fused_ordering(602) 00:13:54.139 fused_ordering(603) 00:13:54.139 fused_ordering(604) 00:13:54.139 fused_ordering(605) 00:13:54.139 fused_ordering(606) 00:13:54.139 fused_ordering(607) 00:13:54.139 fused_ordering(608) 00:13:54.139 fused_ordering(609) 00:13:54.139 fused_ordering(610) 00:13:54.139 fused_ordering(611) 00:13:54.140 fused_ordering(612) 00:13:54.140 fused_ordering(613) 00:13:54.140 fused_ordering(614) 00:13:54.140 fused_ordering(615) 00:13:54.707 fused_ordering(616) 00:13:54.707 fused_ordering(617) 00:13:54.707 fused_ordering(618) 00:13:54.707 fused_ordering(619) 00:13:54.707 fused_ordering(620) 00:13:54.707 fused_ordering(621) 00:13:54.707 fused_ordering(622) 00:13:54.707 fused_ordering(623) 00:13:54.707 fused_ordering(624) 00:13:54.707 fused_ordering(625) 00:13:54.707 fused_ordering(626) 00:13:54.707 fused_ordering(627) 00:13:54.707 fused_ordering(628) 00:13:54.707 fused_ordering(629) 00:13:54.707 fused_ordering(630) 00:13:54.707 fused_ordering(631) 00:13:54.707 fused_ordering(632) 00:13:54.707 fused_ordering(633) 00:13:54.707 fused_ordering(634) 00:13:54.707 fused_ordering(635) 00:13:54.707 
fused_ordering(636) 00:13:54.707 fused_ordering(637) 00:13:54.707 fused_ordering(638) 00:13:54.707 fused_ordering(639) 00:13:54.707 fused_ordering(640) 00:13:54.707 fused_ordering(641) 00:13:54.707 fused_ordering(642) 00:13:54.707 fused_ordering(643) 00:13:54.707 fused_ordering(644) 00:13:54.707 fused_ordering(645) 00:13:54.707 fused_ordering(646) 00:13:54.707 fused_ordering(647) 00:13:54.707 fused_ordering(648) 00:13:54.707 fused_ordering(649) 00:13:54.707 fused_ordering(650) 00:13:54.707 fused_ordering(651) 00:13:54.707 fused_ordering(652) 00:13:54.707 fused_ordering(653) 00:13:54.707 fused_ordering(654) 00:13:54.707 fused_ordering(655) 00:13:54.707 fused_ordering(656) 00:13:54.707 fused_ordering(657) 00:13:54.707 fused_ordering(658) 00:13:54.707 fused_ordering(659) 00:13:54.707 fused_ordering(660) 00:13:54.707 fused_ordering(661) 00:13:54.707 fused_ordering(662) 00:13:54.707 fused_ordering(663) 00:13:54.707 fused_ordering(664) 00:13:54.707 fused_ordering(665) 00:13:54.707 fused_ordering(666) 00:13:54.707 fused_ordering(667) 00:13:54.707 fused_ordering(668) 00:13:54.707 fused_ordering(669) 00:13:54.707 fused_ordering(670) 00:13:54.707 fused_ordering(671) 00:13:54.707 fused_ordering(672) 00:13:54.707 fused_ordering(673) 00:13:54.707 fused_ordering(674) 00:13:54.708 fused_ordering(675) 00:13:54.708 fused_ordering(676) 00:13:54.708 fused_ordering(677) 00:13:54.708 fused_ordering(678) 00:13:54.708 fused_ordering(679) 00:13:54.708 fused_ordering(680) 00:13:54.708 fused_ordering(681) 00:13:54.708 fused_ordering(682) 00:13:54.708 fused_ordering(683) 00:13:54.708 fused_ordering(684) 00:13:54.708 fused_ordering(685) 00:13:54.708 fused_ordering(686) 00:13:54.708 fused_ordering(687) 00:13:54.708 fused_ordering(688) 00:13:54.708 fused_ordering(689) 00:13:54.708 fused_ordering(690) 00:13:54.708 fused_ordering(691) 00:13:54.708 fused_ordering(692) 00:13:54.708 fused_ordering(693) 00:13:54.708 fused_ordering(694) 00:13:54.708 fused_ordering(695) 00:13:54.708 fused_ordering(696) 00:13:54.708 fused_ordering(697) 00:13:54.708 fused_ordering(698) 00:13:54.708 fused_ordering(699) 00:13:54.708 fused_ordering(700) 00:13:54.708 fused_ordering(701) 00:13:54.708 fused_ordering(702) 00:13:54.708 fused_ordering(703) 00:13:54.708 fused_ordering(704) 00:13:54.708 fused_ordering(705) 00:13:54.708 fused_ordering(706) 00:13:54.708 fused_ordering(707) 00:13:54.708 fused_ordering(708) 00:13:54.708 fused_ordering(709) 00:13:54.708 fused_ordering(710) 00:13:54.708 fused_ordering(711) 00:13:54.708 fused_ordering(712) 00:13:54.708 fused_ordering(713) 00:13:54.708 fused_ordering(714) 00:13:54.708 fused_ordering(715) 00:13:54.708 fused_ordering(716) 00:13:54.708 fused_ordering(717) 00:13:54.708 fused_ordering(718) 00:13:54.708 fused_ordering(719) 00:13:54.708 fused_ordering(720) 00:13:54.708 fused_ordering(721) 00:13:54.708 fused_ordering(722) 00:13:54.708 fused_ordering(723) 00:13:54.708 fused_ordering(724) 00:13:54.708 fused_ordering(725) 00:13:54.708 fused_ordering(726) 00:13:54.708 fused_ordering(727) 00:13:54.708 fused_ordering(728) 00:13:54.708 fused_ordering(729) 00:13:54.708 fused_ordering(730) 00:13:54.708 fused_ordering(731) 00:13:54.708 fused_ordering(732) 00:13:54.708 fused_ordering(733) 00:13:54.708 fused_ordering(734) 00:13:54.708 fused_ordering(735) 00:13:54.708 fused_ordering(736) 00:13:54.708 fused_ordering(737) 00:13:54.708 fused_ordering(738) 00:13:54.708 fused_ordering(739) 00:13:54.708 fused_ordering(740) 00:13:54.708 fused_ordering(741) 00:13:54.708 fused_ordering(742) 00:13:54.708 fused_ordering(743) 
00:13:54.708 fused_ordering(744) 00:13:54.708 fused_ordering(745) 00:13:54.708 fused_ordering(746) 00:13:54.708 fused_ordering(747) 00:13:54.708 fused_ordering(748) 00:13:54.708 fused_ordering(749) 00:13:54.708 fused_ordering(750) 00:13:54.708 fused_ordering(751) 00:13:54.708 fused_ordering(752) 00:13:54.708 fused_ordering(753) 00:13:54.708 fused_ordering(754) 00:13:54.708 fused_ordering(755) 00:13:54.708 fused_ordering(756) 00:13:54.708 fused_ordering(757) 00:13:54.708 fused_ordering(758) 00:13:54.708 fused_ordering(759) 00:13:54.708 fused_ordering(760) 00:13:54.708 fused_ordering(761) 00:13:54.708 fused_ordering(762) 00:13:54.708 fused_ordering(763) 00:13:54.708 fused_ordering(764) 00:13:54.708 fused_ordering(765) 00:13:54.708 fused_ordering(766) 00:13:54.708 fused_ordering(767) 00:13:54.708 fused_ordering(768) 00:13:54.708 fused_ordering(769) 00:13:54.708 fused_ordering(770) 00:13:54.708 fused_ordering(771) 00:13:54.708 fused_ordering(772) 00:13:54.708 fused_ordering(773) 00:13:54.708 fused_ordering(774) 00:13:54.708 fused_ordering(775) 00:13:54.708 fused_ordering(776) 00:13:54.708 fused_ordering(777) 00:13:54.708 fused_ordering(778) 00:13:54.708 fused_ordering(779) 00:13:54.708 fused_ordering(780) 00:13:54.708 fused_ordering(781) 00:13:54.708 fused_ordering(782) 00:13:54.708 fused_ordering(783) 00:13:54.708 fused_ordering(784) 00:13:54.708 fused_ordering(785) 00:13:54.708 fused_ordering(786) 00:13:54.708 fused_ordering(787) 00:13:54.708 fused_ordering(788) 00:13:54.708 fused_ordering(789) 00:13:54.708 fused_ordering(790) 00:13:54.708 fused_ordering(791) 00:13:54.708 fused_ordering(792) 00:13:54.708 fused_ordering(793) 00:13:54.708 fused_ordering(794) 00:13:54.708 fused_ordering(795) 00:13:54.708 fused_ordering(796) 00:13:54.708 fused_ordering(797) 00:13:54.708 fused_ordering(798) 00:13:54.708 fused_ordering(799) 00:13:54.708 fused_ordering(800) 00:13:54.708 fused_ordering(801) 00:13:54.708 fused_ordering(802) 00:13:54.708 fused_ordering(803) 00:13:54.708 fused_ordering(804) 00:13:54.708 fused_ordering(805) 00:13:54.708 fused_ordering(806) 00:13:54.708 fused_ordering(807) 00:13:54.708 fused_ordering(808) 00:13:54.708 fused_ordering(809) 00:13:54.708 fused_ordering(810) 00:13:54.708 fused_ordering(811) 00:13:54.708 fused_ordering(812) 00:13:54.708 fused_ordering(813) 00:13:54.708 fused_ordering(814) 00:13:54.708 fused_ordering(815) 00:13:54.708 fused_ordering(816) 00:13:54.708 fused_ordering(817) 00:13:54.708 fused_ordering(818) 00:13:54.708 fused_ordering(819) 00:13:54.708 fused_ordering(820) 00:13:55.276 fused_ordering(821) 00:13:55.276 fused_ordering(822) 00:13:55.276 fused_ordering(823) 00:13:55.276 fused_ordering(824) 00:13:55.276 fused_ordering(825) 00:13:55.276 fused_ordering(826) 00:13:55.276 fused_ordering(827) 00:13:55.276 fused_ordering(828) 00:13:55.276 fused_ordering(829) 00:13:55.276 fused_ordering(830) 00:13:55.276 fused_ordering(831) 00:13:55.276 fused_ordering(832) 00:13:55.276 fused_ordering(833) 00:13:55.276 fused_ordering(834) 00:13:55.276 fused_ordering(835) 00:13:55.276 fused_ordering(836) 00:13:55.276 fused_ordering(837) 00:13:55.276 fused_ordering(838) 00:13:55.276 fused_ordering(839) 00:13:55.276 fused_ordering(840) 00:13:55.276 fused_ordering(841) 00:13:55.276 fused_ordering(842) 00:13:55.276 fused_ordering(843) 00:13:55.276 fused_ordering(844) 00:13:55.276 fused_ordering(845) 00:13:55.276 fused_ordering(846) 00:13:55.276 fused_ordering(847) 00:13:55.276 fused_ordering(848) 00:13:55.276 fused_ordering(849) 00:13:55.276 fused_ordering(850) 00:13:55.276 
fused_ordering(851) 00:13:55.276 fused_ordering(852) 00:13:55.276 fused_ordering(853) 00:13:55.276 fused_ordering(854) 00:13:55.276 fused_ordering(855) 00:13:55.276 fused_ordering(856) 00:13:55.276 fused_ordering(857) 00:13:55.276 fused_ordering(858) 00:13:55.276 fused_ordering(859) 00:13:55.276 fused_ordering(860) 00:13:55.276 fused_ordering(861) 00:13:55.276 fused_ordering(862) 00:13:55.276 fused_ordering(863) 00:13:55.276 fused_ordering(864) 00:13:55.276 fused_ordering(865) 00:13:55.276 fused_ordering(866) 00:13:55.276 fused_ordering(867) 00:13:55.276 fused_ordering(868) 00:13:55.276 fused_ordering(869) 00:13:55.276 fused_ordering(870) 00:13:55.276 fused_ordering(871) 00:13:55.276 fused_ordering(872) 00:13:55.276 fused_ordering(873) 00:13:55.276 fused_ordering(874) 00:13:55.276 fused_ordering(875) 00:13:55.276 fused_ordering(876) 00:13:55.276 fused_ordering(877) 00:13:55.276 fused_ordering(878) 00:13:55.276 fused_ordering(879) 00:13:55.276 fused_ordering(880) 00:13:55.276 fused_ordering(881) 00:13:55.276 fused_ordering(882) 00:13:55.276 fused_ordering(883) 00:13:55.276 fused_ordering(884) 00:13:55.276 fused_ordering(885) 00:13:55.276 fused_ordering(886) 00:13:55.276 fused_ordering(887) 00:13:55.276 fused_ordering(888) 00:13:55.276 fused_ordering(889) 00:13:55.276 fused_ordering(890) 00:13:55.276 fused_ordering(891) 00:13:55.276 fused_ordering(892) 00:13:55.276 fused_ordering(893) 00:13:55.276 fused_ordering(894) 00:13:55.276 fused_ordering(895) 00:13:55.276 fused_ordering(896) 00:13:55.276 fused_ordering(897) 00:13:55.276 fused_ordering(898) 00:13:55.276 fused_ordering(899) 00:13:55.276 fused_ordering(900) 00:13:55.276 fused_ordering(901) 00:13:55.276 fused_ordering(902) 00:13:55.276 fused_ordering(903) 00:13:55.276 fused_ordering(904) 00:13:55.276 fused_ordering(905) 00:13:55.276 fused_ordering(906) 00:13:55.276 fused_ordering(907) 00:13:55.276 fused_ordering(908) 00:13:55.276 fused_ordering(909) 00:13:55.276 fused_ordering(910) 00:13:55.276 fused_ordering(911) 00:13:55.276 fused_ordering(912) 00:13:55.276 fused_ordering(913) 00:13:55.276 fused_ordering(914) 00:13:55.276 fused_ordering(915) 00:13:55.276 fused_ordering(916) 00:13:55.276 fused_ordering(917) 00:13:55.276 fused_ordering(918) 00:13:55.276 fused_ordering(919) 00:13:55.276 fused_ordering(920) 00:13:55.276 fused_ordering(921) 00:13:55.276 fused_ordering(922) 00:13:55.276 fused_ordering(923) 00:13:55.276 fused_ordering(924) 00:13:55.276 fused_ordering(925) 00:13:55.276 fused_ordering(926) 00:13:55.276 fused_ordering(927) 00:13:55.276 fused_ordering(928) 00:13:55.276 fused_ordering(929) 00:13:55.276 fused_ordering(930) 00:13:55.276 fused_ordering(931) 00:13:55.276 fused_ordering(932) 00:13:55.276 fused_ordering(933) 00:13:55.276 fused_ordering(934) 00:13:55.276 fused_ordering(935) 00:13:55.276 fused_ordering(936) 00:13:55.276 fused_ordering(937) 00:13:55.276 fused_ordering(938) 00:13:55.276 fused_ordering(939) 00:13:55.276 fused_ordering(940) 00:13:55.276 fused_ordering(941) 00:13:55.276 fused_ordering(942) 00:13:55.276 fused_ordering(943) 00:13:55.276 fused_ordering(944) 00:13:55.276 fused_ordering(945) 00:13:55.276 fused_ordering(946) 00:13:55.276 fused_ordering(947) 00:13:55.276 fused_ordering(948) 00:13:55.276 fused_ordering(949) 00:13:55.276 fused_ordering(950) 00:13:55.276 fused_ordering(951) 00:13:55.276 fused_ordering(952) 00:13:55.276 fused_ordering(953) 00:13:55.276 fused_ordering(954) 00:13:55.276 fused_ordering(955) 00:13:55.276 fused_ordering(956) 00:13:55.276 fused_ordering(957) 00:13:55.276 fused_ordering(958) 
00:13:55.276 fused_ordering(959) 00:13:55.276 fused_ordering(960) 00:13:55.276 fused_ordering(961) 00:13:55.276 fused_ordering(962) 00:13:55.277 fused_ordering(963) 00:13:55.277 fused_ordering(964) 00:13:55.277 fused_ordering(965) 00:13:55.277 fused_ordering(966) 00:13:55.277 fused_ordering(967) 00:13:55.277 fused_ordering(968) 00:13:55.277 fused_ordering(969) 00:13:55.277 fused_ordering(970) 00:13:55.277 fused_ordering(971) 00:13:55.277 fused_ordering(972) 00:13:55.277 fused_ordering(973) 00:13:55.277 fused_ordering(974) 00:13:55.277 fused_ordering(975) 00:13:55.277 fused_ordering(976) 00:13:55.277 fused_ordering(977) 00:13:55.277 fused_ordering(978) 00:13:55.277 fused_ordering(979) 00:13:55.277 fused_ordering(980) 00:13:55.277 fused_ordering(981) 00:13:55.277 fused_ordering(982) 00:13:55.277 fused_ordering(983) 00:13:55.277 fused_ordering(984) 00:13:55.277 fused_ordering(985) 00:13:55.277 fused_ordering(986) 00:13:55.277 fused_ordering(987) 00:13:55.277 fused_ordering(988) 00:13:55.277 fused_ordering(989) 00:13:55.277 fused_ordering(990) 00:13:55.277 fused_ordering(991) 00:13:55.277 fused_ordering(992) 00:13:55.277 fused_ordering(993) 00:13:55.277 fused_ordering(994) 00:13:55.277 fused_ordering(995) 00:13:55.277 fused_ordering(996) 00:13:55.277 fused_ordering(997) 00:13:55.277 fused_ordering(998) 00:13:55.277 fused_ordering(999) 00:13:55.277 fused_ordering(1000) 00:13:55.277 fused_ordering(1001) 00:13:55.277 fused_ordering(1002) 00:13:55.277 fused_ordering(1003) 00:13:55.277 fused_ordering(1004) 00:13:55.277 fused_ordering(1005) 00:13:55.277 fused_ordering(1006) 00:13:55.277 fused_ordering(1007) 00:13:55.277 fused_ordering(1008) 00:13:55.277 fused_ordering(1009) 00:13:55.277 fused_ordering(1010) 00:13:55.277 fused_ordering(1011) 00:13:55.277 fused_ordering(1012) 00:13:55.277 fused_ordering(1013) 00:13:55.277 fused_ordering(1014) 00:13:55.277 fused_ordering(1015) 00:13:55.277 fused_ordering(1016) 00:13:55.277 fused_ordering(1017) 00:13:55.277 fused_ordering(1018) 00:13:55.277 fused_ordering(1019) 00:13:55.277 fused_ordering(1020) 00:13:55.277 fused_ordering(1021) 00:13:55.277 fused_ordering(1022) 00:13:55.277 fused_ordering(1023) 00:13:55.277 00:22:42 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:55.277 00:22:42 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:55.277 00:22:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:55.277 00:22:42 -- nvmf/common.sh@116 -- # sync 00:13:55.277 00:22:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:55.277 00:22:42 -- nvmf/common.sh@119 -- # set +e 00:13:55.277 00:22:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:55.277 00:22:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:55.277 rmmod nvme_tcp 00:13:55.277 rmmod nvme_fabrics 00:13:55.277 rmmod nvme_keyring 00:13:55.277 00:22:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:55.277 00:22:42 -- nvmf/common.sh@123 -- # set -e 00:13:55.277 00:22:42 -- nvmf/common.sh@124 -- # return 0 00:13:55.277 00:22:42 -- nvmf/common.sh@477 -- # '[' -n 81801 ']' 00:13:55.277 00:22:42 -- nvmf/common.sh@478 -- # killprocess 81801 00:13:55.277 00:22:42 -- common/autotest_common.sh@926 -- # '[' -z 81801 ']' 00:13:55.277 00:22:42 -- common/autotest_common.sh@930 -- # kill -0 81801 00:13:55.277 00:22:42 -- common/autotest_common.sh@931 -- # uname 00:13:55.277 00:22:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:55.277 00:22:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81801 00:13:55.277 killing process with pid 81801 
00:13:55.277 00:22:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:55.277 00:22:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:55.277 00:22:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81801' 00:13:55.277 00:22:42 -- common/autotest_common.sh@945 -- # kill 81801 00:13:55.277 00:22:42 -- common/autotest_common.sh@950 -- # wait 81801 00:13:55.536 00:22:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:55.536 00:22:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:55.536 00:22:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:55.536 00:22:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:55.536 00:22:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:55.536 00:22:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.536 00:22:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.536 00:22:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.536 00:22:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:55.536 00:13:55.536 real 0m3.836s 00:13:55.536 user 0m4.553s 00:13:55.536 sys 0m1.298s 00:13:55.536 00:22:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.536 00:22:42 -- common/autotest_common.sh@10 -- # set +x 00:13:55.536 ************************************ 00:13:55.536 END TEST nvmf_fused_ordering 00:13:55.536 ************************************ 00:13:55.536 00:22:42 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:55.536 00:22:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:55.536 00:22:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:55.536 00:22:42 -- common/autotest_common.sh@10 -- # set +x 00:13:55.536 ************************************ 00:13:55.536 START TEST nvmf_delete_subsystem 00:13:55.536 ************************************ 00:13:55.536 00:22:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:55.536 * Looking for test storage... 
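The fused_ordering(...) counters above, running up to 1023, are per-iteration progress output from the fused ordering test application; once it finishes, the trap is cleared and nvmftestfini tears the target down before the next test begins. A minimal sketch of that teardown, assembled only from the commands visible in the trace (81801 is this run's nvmf_tgt pid; the final namespace removal is hidden behind _remove_spdk_ns in the trace and is an assumption here):

    # unload the kernel NVMe/TCP initiator stack; verbose modprobe also drops nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the nvmf_tgt application started by nvmfappstart, then reap it
    kill 81801
    wait 81801
    # flush addresses on the initiator-side veth interface
    ip -4 addr flush nvmf_init_if
    # assumed effect of _remove_spdk_ns (not shown verbatim in the trace)
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true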
00:13:55.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:55.536 00:22:42 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:55.536 00:22:42 -- nvmf/common.sh@7 -- # uname -s 00:13:55.536 00:22:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.536 00:22:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.536 00:22:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.536 00:22:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.536 00:22:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.536 00:22:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.536 00:22:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.536 00:22:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.536 00:22:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.536 00:22:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.536 00:22:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:13:55.536 00:22:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:13:55.536 00:22:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.536 00:22:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.537 00:22:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:55.537 00:22:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:55.537 00:22:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.537 00:22:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.537 00:22:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.537 00:22:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.537 00:22:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.537 00:22:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.537 00:22:42 -- 
paths/export.sh@5 -- # export PATH 00:13:55.537 00:22:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.537 00:22:42 -- nvmf/common.sh@46 -- # : 0 00:13:55.537 00:22:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:55.537 00:22:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:55.537 00:22:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:55.537 00:22:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.537 00:22:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.537 00:22:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:55.537 00:22:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:55.537 00:22:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:55.537 00:22:42 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:55.537 00:22:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:55.537 00:22:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.537 00:22:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:55.537 00:22:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:55.537 00:22:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:55.537 00:22:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.537 00:22:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.537 00:22:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.537 00:22:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:55.537 00:22:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:55.537 00:22:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:55.537 00:22:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:55.537 00:22:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:55.537 00:22:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:55.537 00:22:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.537 00:22:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.537 00:22:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:55.537 00:22:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:55.537 00:22:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:55.537 00:22:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:55.537 00:22:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:55.537 00:22:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.537 00:22:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:55.537 00:22:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:55.537 00:22:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:55.537 00:22:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:55.537 00:22:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:55.537 00:22:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:55.798 Cannot find device "nvmf_tgt_br" 00:13:55.798 
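The 'Cannot find device' messages here and just below are expected rather than failures: nvmf_veth_init begins by detaching and deleting whatever test interfaces a previous run may have left behind, and each removal is followed by a '# true' fallback so a clean host does not abort the script. A best-effort sketch of that pre-cleanup, using the interface names seen throughout this log (the '|| true' is inferred from the trace):

    ip link set nvmf_init_br nomaster  || true
    ip link set nvmf_tgt_br nomaster   || true
    ip link set nvmf_tgt_br2 nomaster  || true
    ip link set nvmf_init_br down      || true
    ip link set nvmf_tgt_br down       || true
    ip link set nvmf_tgt_br2 down      || true
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if        || true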
00:22:42 -- nvmf/common.sh@154 -- # true 00:13:55.798 00:22:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:55.798 Cannot find device "nvmf_tgt_br2" 00:13:55.798 00:22:42 -- nvmf/common.sh@155 -- # true 00:13:55.798 00:22:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:55.798 00:22:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:55.798 Cannot find device "nvmf_tgt_br" 00:13:55.798 00:22:42 -- nvmf/common.sh@157 -- # true 00:13:55.798 00:22:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:55.798 Cannot find device "nvmf_tgt_br2" 00:13:55.798 00:22:42 -- nvmf/common.sh@158 -- # true 00:13:55.798 00:22:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:55.798 00:22:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:55.798 00:22:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:55.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:55.798 00:22:42 -- nvmf/common.sh@161 -- # true 00:13:55.798 00:22:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:55.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:55.798 00:22:42 -- nvmf/common.sh@162 -- # true 00:13:55.798 00:22:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:55.798 00:22:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:55.798 00:22:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:55.798 00:22:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:55.798 00:22:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:55.798 00:22:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:55.798 00:22:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:55.798 00:22:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:55.798 00:22:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:55.798 00:22:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:55.798 00:22:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:55.798 00:22:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:55.798 00:22:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:55.798 00:22:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:55.798 00:22:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:55.798 00:22:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:55.798 00:22:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:55.798 00:22:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:55.798 00:22:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:55.798 00:22:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:55.798 00:22:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:56.057 00:22:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:56.057 00:22:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:56.057 00:22:43 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:13:56.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:13:56.057 00:13:56.057 --- 10.0.0.2 ping statistics --- 00:13:56.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.057 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:13:56.057 00:22:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:56.057 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:56.057 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:13:56.057 00:13:56.057 --- 10.0.0.3 ping statistics --- 00:13:56.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.057 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:13:56.057 00:22:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:56.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:56.057 00:13:56.057 --- 10.0.0.1 ping statistics --- 00:13:56.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.057 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:56.057 00:22:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.057 00:22:43 -- nvmf/common.sh@421 -- # return 0 00:13:56.057 00:22:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:56.057 00:22:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.057 00:22:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:56.057 00:22:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:56.057 00:22:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.057 00:22:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:56.058 00:22:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:56.058 00:22:43 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:56.058 00:22:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:56.058 00:22:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:56.058 00:22:43 -- common/autotest_common.sh@10 -- # set +x 00:13:56.058 00:22:43 -- nvmf/common.sh@469 -- # nvmfpid=82051 00:13:56.058 00:22:43 -- nvmf/common.sh@470 -- # waitforlisten 82051 00:13:56.058 00:22:43 -- common/autotest_common.sh@819 -- # '[' -z 82051 ']' 00:13:56.058 00:22:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:56.058 00:22:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.058 00:22:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:56.058 00:22:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.058 00:22:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:56.058 00:22:43 -- common/autotest_common.sh@10 -- # set +x 00:13:56.058 [2024-07-13 00:22:43.144585] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
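With the stale interfaces gone, nvmf_veth_init builds the topology this suite uses for NVMe/TCP: the namespace nvmf_tgt_ns_spdk holds the target-side veth ends at 10.0.0.2 and 10.0.0.3, the initiator side sits at 10.0.0.1, and the peer interfaces are joined through the nvmf_br bridge; the three pings above confirm each path before nvmfappstart launches the target inside the namespace. A condensed sketch of the commands taken from the trace (the second target interface is configured the same way as the first and is omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # initiator to target address
    # the target then runs inside the namespace (trace: nvmfappstart -m 0x3)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3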
00:13:56.058 [2024-07-13 00:22:43.144727] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.315 [2024-07-13 00:22:43.289156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:56.315 [2024-07-13 00:22:43.389550] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:56.315 [2024-07-13 00:22:43.389752] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.315 [2024-07-13 00:22:43.389769] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.315 [2024-07-13 00:22:43.389781] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.315 [2024-07-13 00:22:43.389948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.315 [2024-07-13 00:22:43.389962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.251 00:22:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:57.251 00:22:44 -- common/autotest_common.sh@852 -- # return 0 00:13:57.251 00:22:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:57.251 00:22:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:57.251 00:22:44 -- common/autotest_common.sh@10 -- # set +x 00:13:57.251 00:22:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.251 00:22:44 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:57.251 00:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.251 00:22:44 -- common/autotest_common.sh@10 -- # set +x 00:13:57.251 [2024-07-13 00:22:44.190899] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.251 00:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.251 00:22:44 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:57.251 00:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.251 00:22:44 -- common/autotest_common.sh@10 -- # set +x 00:13:57.251 00:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.251 00:22:44 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.251 00:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.251 00:22:44 -- common/autotest_common.sh@10 -- # set +x 00:13:57.251 [2024-07-13 00:22:44.207066] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.251 00:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.251 00:22:44 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:57.251 00:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.251 00:22:44 -- common/autotest_common.sh@10 -- # set +x 00:13:57.251 NULL1 00:13:57.251 00:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.251 00:22:44 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:57.251 00:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.251 00:22:44 -- common/autotest_common.sh@10 -- # set +x 00:13:57.251 
Delay0 00:13:57.251 00:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.251 00:22:44 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.251 00:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.251 00:22:44 -- common/autotest_common.sh@10 -- # set +x 00:13:57.251 00:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.251 00:22:44 -- target/delete_subsystem.sh@28 -- # perf_pid=82104 00:13:57.251 00:22:44 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:57.251 00:22:44 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:57.252 [2024-07-13 00:22:44.401605] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:59.154 00:22:46 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.154 00:22:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:59.154 00:22:46 -- common/autotest_common.sh@10 -- # set +x 00:13:59.413 Write completed with error (sct=0, sc=8) 00:13:59.413 Write completed with error (sct=0, sc=8) 00:13:59.413 starting I/O failed: -6 00:13:59.413 Write completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 starting I/O failed: -6 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 Write completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 starting I/O failed: -6 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 Write completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 starting I/O failed: -6 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 Write completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 starting I/O failed: -6 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 Write completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 starting I/O failed: -6 00:13:59.413 Write completed with error (sct=0, sc=8) 00:13:59.413 Write completed with error (sct=0, sc=8) 00:13:59.413 Write completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 starting I/O failed: -6 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 Write completed with error (sct=0, sc=8) 00:13:59.413 starting I/O failed: -6 00:13:59.413 Write completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 Write completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 starting I/O failed: -6 00:13:59.413 Read completed with error (sct=0, sc=8) 
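The long run of 'completed with error (sct=0, sc=8)' lines that begins here is the point of the delete_subsystem test, not a malfunction: the NVMe namespace exported by cnode1 is a null bdev wrapped in a delay bdev, so the queue-depth-128 perf workload still has I/O outstanding when nvmf_delete_subsystem is issued two seconds in; those commands complete with the generic aborted status (sc=8 corresponds to 'command aborted due to SQ deletion') and new submissions fail with -6. A sketch of the sequence, using scripts/rpc.py as the assumed standalone equivalent of the rpc_cmd wrapper seen in the trace:

    # target side: TCP transport, subsystem, listener, and a deliberately slow namespace
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    # wrap NULL1 in a delay bdev so requests stay queued long enough to race the delete
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # initiator side: start I/O, then pull the subsystem out from under it
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1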
00:13:59.413 Write completed with error (sct=0, sc=8) 00:13:59.413 Write completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 starting I/O failed: -6 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 Write completed with error (sct=0, sc=8) 00:13:59.413 Write completed with error (sct=0, sc=8) 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.413 starting I/O failed: -6 00:13:59.413 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 [2024-07-13 00:22:46.434800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8e60 is same with the state(5) to be set 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 
00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 [2024-07-13 00:22:46.436487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb94e0 is same with the state(5) to be set 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 starting I/O failed: -6 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 starting I/O failed: -6 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 starting I/O failed: -6 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 starting I/O failed: -6 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 starting I/O failed: -6 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 starting I/O failed: -6 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 starting I/O failed: -6 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 starting I/O failed: -6 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 [2024-07-13 00:22:46.438038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4c70000c00 is same with the state(5) to be set 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error 
(sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 [2024-07-13 00:22:46.438448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4c7000c1d0 is same with the state(5) to be set 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Write completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 Read completed with error (sct=0, sc=8) 00:13:59.414 [2024-07-13 00:22:46.439913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4c7000c480 is same with the state(5) to be set 00:13:59.414 Write completed with error 
(sct=0, sc=8) 00:13:59.415 Write completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Write completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Write completed with error (sct=0, sc=8) 00:13:59.415 Write completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Write completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Write completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Write completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Write completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 Read completed with error (sct=0, sc=8) 00:13:59.415 [2024-07-13 00:22:46.440169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4c7000bf20 is same with the state(5) to be set 00:14:00.350 [2024-07-13 00:22:47.415434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbc460 is same with the state(5) to be set 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Write completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Write completed with error (sct=0, sc=8) 00:14:00.350 Write completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Write completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Write completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 
00:14:00.350 Write completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Write completed with error (sct=0, sc=8) 00:14:00.350 Write completed with error (sct=0, sc=8) 00:14:00.350 Write completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 [2024-07-13 00:22:47.439436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9230 is same with the state(5) to be set 00:14:00.350 Write completed with error (sct=0, sc=8) 00:14:00.350 Write completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Write completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Write completed with error (sct=0, sc=8) 00:14:00.350 Write completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Write completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 Read completed with error (sct=0, sc=8) 00:14:00.350 [2024-07-13 00:22:47.439658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb9790 is same with the state(5) to be set 00:14:00.351 [2024-07-13 00:22:47.440603] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbc460 (9): Bad file descriptor 00:14:00.351 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:00.351 00:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.351 00:22:47 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:00.351 00:22:47 -- target/delete_subsystem.sh@35 -- # kill -0 82104 00:14:00.351 00:22:47 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:00.351 Initializing NVMe Controllers 00:14:00.351 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:00.351 Controller IO queue size 128, less than required. 00:14:00.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:00.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:00.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:00.351 Initialization complete. Launching workers. 
00:14:00.351 ======================================================== 00:14:00.351 Latency(us) 00:14:00.351 Device Information : IOPS MiB/s Average min max 00:14:00.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.05 0.08 913060.30 1012.33 2005864.79 00:14:00.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 139.21 0.07 921205.83 425.66 1012982.00 00:14:00.351 ======================================================== 00:14:00.351 Total : 306.26 0.15 916762.81 425.66 2005864.79 00:14:00.351 00:14:00.918 00:22:47 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:00.918 00:22:47 -- target/delete_subsystem.sh@35 -- # kill -0 82104 00:14:00.918 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82104) - No such process 00:14:00.918 00:22:47 -- target/delete_subsystem.sh@45 -- # NOT wait 82104 00:14:00.918 00:22:47 -- common/autotest_common.sh@640 -- # local es=0 00:14:00.918 00:22:47 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 82104 00:14:00.918 00:22:47 -- common/autotest_common.sh@628 -- # local arg=wait 00:14:00.918 00:22:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:00.918 00:22:47 -- common/autotest_common.sh@632 -- # type -t wait 00:14:00.918 00:22:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:00.918 00:22:47 -- common/autotest_common.sh@643 -- # wait 82104 00:14:00.918 00:22:47 -- common/autotest_common.sh@643 -- # es=1 00:14:00.918 00:22:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:00.918 00:22:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:00.918 00:22:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:00.918 00:22:47 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:00.918 00:22:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.918 00:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:00.918 00:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.918 00:22:47 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.918 00:22:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.918 00:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:00.918 [2024-07-13 00:22:47.964806] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.918 00:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.918 00:22:47 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.918 00:22:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.918 00:22:47 -- common/autotest_common.sh@10 -- # set +x 00:14:00.918 00:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.918 00:22:47 -- target/delete_subsystem.sh@54 -- # perf_pid=82155 00:14:00.918 00:22:47 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:00.918 00:22:47 -- target/delete_subsystem.sh@57 -- # kill -0 82155 00:14:00.918 00:22:47 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:00.918 00:22:47 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:00.918 [2024-07-13 00:22:48.134689] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: 
*WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:01.484 00:22:48 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:01.484 00:22:48 -- target/delete_subsystem.sh@57 -- # kill -0 82155 00:14:01.484 00:22:48 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:02.051 00:22:48 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:02.051 00:22:48 -- target/delete_subsystem.sh@57 -- # kill -0 82155 00:14:02.051 00:22:48 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:02.309 00:22:49 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:02.310 00:22:49 -- target/delete_subsystem.sh@57 -- # kill -0 82155 00:14:02.310 00:22:49 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:02.877 00:22:49 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:02.877 00:22:49 -- target/delete_subsystem.sh@57 -- # kill -0 82155 00:14:02.877 00:22:49 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:03.445 00:22:50 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:03.445 00:22:50 -- target/delete_subsystem.sh@57 -- # kill -0 82155 00:14:03.445 00:22:50 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:04.012 00:22:51 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:04.012 00:22:51 -- target/delete_subsystem.sh@57 -- # kill -0 82155 00:14:04.012 00:22:51 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:04.012 Initializing NVMe Controllers 00:14:04.012 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:04.012 Controller IO queue size 128, less than required. 00:14:04.012 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:04.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:04.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:04.012 Initialization complete. Launching workers. 
00:14:04.012 ======================================================== 00:14:04.012 Latency(us) 00:14:04.012 Device Information : IOPS MiB/s Average min max 00:14:04.012 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003650.62 1000181.28 1041770.86 00:14:04.012 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005201.52 1000196.14 1013110.88 00:14:04.012 ======================================================== 00:14:04.012 Total : 256.00 0.12 1004426.07 1000181.28 1041770.86 00:14:04.012 00:14:04.579 00:22:51 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:04.579 00:22:51 -- target/delete_subsystem.sh@57 -- # kill -0 82155 00:14:04.579 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82155) - No such process 00:14:04.579 00:22:51 -- target/delete_subsystem.sh@67 -- # wait 82155 00:14:04.579 00:22:51 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:04.579 00:22:51 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:04.579 00:22:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:04.579 00:22:51 -- nvmf/common.sh@116 -- # sync 00:14:04.579 00:22:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:04.579 00:22:51 -- nvmf/common.sh@119 -- # set +e 00:14:04.579 00:22:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:04.579 00:22:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:04.579 rmmod nvme_tcp 00:14:04.579 rmmod nvme_fabrics 00:14:04.579 rmmod nvme_keyring 00:14:04.579 00:22:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:04.579 00:22:51 -- nvmf/common.sh@123 -- # set -e 00:14:04.579 00:22:51 -- nvmf/common.sh@124 -- # return 0 00:14:04.579 00:22:51 -- nvmf/common.sh@477 -- # '[' -n 82051 ']' 00:14:04.579 00:22:51 -- nvmf/common.sh@478 -- # killprocess 82051 00:14:04.579 00:22:51 -- common/autotest_common.sh@926 -- # '[' -z 82051 ']' 00:14:04.579 00:22:51 -- common/autotest_common.sh@930 -- # kill -0 82051 00:14:04.579 00:22:51 -- common/autotest_common.sh@931 -- # uname 00:14:04.579 00:22:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:04.579 00:22:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82051 00:14:04.579 00:22:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:04.579 killing process with pid 82051 00:14:04.579 00:22:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:04.579 00:22:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82051' 00:14:04.579 00:22:51 -- common/autotest_common.sh@945 -- # kill 82051 00:14:04.579 00:22:51 -- common/autotest_common.sh@950 -- # wait 82051 00:14:04.838 00:22:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:04.838 00:22:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:04.838 00:22:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:04.838 00:22:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:04.838 00:22:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:04.838 00:22:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.838 00:22:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.838 00:22:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.838 00:22:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:04.838 00:14:04.838 real 0m9.262s 00:14:04.838 user 0m27.884s 00:14:04.838 sys 0m1.359s 00:14:04.838 00:22:51 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.838 ************************************ 00:14:04.838 END TEST nvmf_delete_subsystem 00:14:04.838 00:22:51 -- common/autotest_common.sh@10 -- # set +x 00:14:04.838 ************************************ 00:14:04.838 00:22:51 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:14:04.838 00:22:51 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:14:04.838 00:22:51 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:04.838 00:22:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:04.838 00:22:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:04.838 00:22:51 -- common/autotest_common.sh@10 -- # set +x 00:14:04.838 ************************************ 00:14:04.838 START TEST nvmf_host_management 00:14:04.839 ************************************ 00:14:04.839 00:22:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:04.839 * Looking for test storage... 00:14:04.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:04.839 00:22:52 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:04.839 00:22:52 -- nvmf/common.sh@7 -- # uname -s 00:14:04.839 00:22:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.839 00:22:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.839 00:22:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.839 00:22:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.839 00:22:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.839 00:22:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.839 00:22:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.839 00:22:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.839 00:22:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.839 00:22:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.839 00:22:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:14:04.839 00:22:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:14:04.839 00:22:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.839 00:22:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.839 00:22:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:04.839 00:22:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:04.839 00:22:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.839 00:22:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.839 00:22:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.839 00:22:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.839 00:22:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.839 00:22:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.839 00:22:52 -- paths/export.sh@5 -- # export PATH 00:14:04.839 00:22:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.839 00:22:52 -- nvmf/common.sh@46 -- # : 0 00:14:04.839 00:22:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:04.839 00:22:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:04.839 00:22:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:04.839 00:22:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.839 00:22:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.839 00:22:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:04.839 00:22:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:04.839 00:22:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:04.839 00:22:52 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:04.839 00:22:52 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:04.839 00:22:52 -- target/host_management.sh@104 -- # nvmftestinit 00:14:04.839 00:22:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:04.839 00:22:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.839 00:22:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:04.839 00:22:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:04.839 00:22:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:04.839 00:22:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.839 00:22:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.839 00:22:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.839 00:22:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:04.839 00:22:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:04.839 00:22:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:04.839 00:22:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:04.839 00:22:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp 
]] 00:14:04.839 00:22:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:04.839 00:22:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.839 00:22:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.839 00:22:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:04.839 00:22:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:04.839 00:22:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:04.839 00:22:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:04.839 00:22:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:04.839 00:22:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.839 00:22:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:04.839 00:22:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:04.839 00:22:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:04.839 00:22:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:04.839 00:22:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:05.098 00:22:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:05.098 Cannot find device "nvmf_tgt_br" 00:14:05.098 00:22:52 -- nvmf/common.sh@154 -- # true 00:14:05.098 00:22:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:05.098 Cannot find device "nvmf_tgt_br2" 00:14:05.098 00:22:52 -- nvmf/common.sh@155 -- # true 00:14:05.098 00:22:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:05.098 00:22:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:05.098 Cannot find device "nvmf_tgt_br" 00:14:05.098 00:22:52 -- nvmf/common.sh@157 -- # true 00:14:05.098 00:22:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:05.099 Cannot find device "nvmf_tgt_br2" 00:14:05.099 00:22:52 -- nvmf/common.sh@158 -- # true 00:14:05.099 00:22:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:05.099 00:22:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:05.099 00:22:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:05.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.099 00:22:52 -- nvmf/common.sh@161 -- # true 00:14:05.099 00:22:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:05.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.099 00:22:52 -- nvmf/common.sh@162 -- # true 00:14:05.099 00:22:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:05.099 00:22:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:05.099 00:22:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:05.099 00:22:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:05.099 00:22:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:05.099 00:22:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:05.099 00:22:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:05.099 00:22:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:05.099 00:22:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:05.099 
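The nvmf_veth_init sequence being traced here builds the test network from scratch. As a rough sketch (namespace, interface and address names are taken from the trace itself; everything else is an illustrative reconstruction, not the literal nvmf/common.sh code), the steps up to this point amount to:

# fresh target namespace; stale devices are deleted first, hence the
# "Cannot find device" / "Cannot open network namespace" noise above
ip netns add nvmf_tgt_ns_spdk

# three veth pairs: the *_if ends carry traffic, the *_br ends will later be
# enslaved to a bridge; the target-side interfaces move into the namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target listen addresses
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

The commands that follow in the trace bring the links up, enslave the bridge-side peers to nvmf_br, open TCP port 4420 in iptables, and ping-check all three addresses.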
00:22:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:05.099 00:22:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:05.099 00:22:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:05.099 00:22:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:05.099 00:22:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:05.358 00:22:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:05.358 00:22:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:05.358 00:22:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:05.358 00:22:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:05.358 00:22:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:05.358 00:22:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:05.358 00:22:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:05.358 00:22:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:05.358 00:22:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:05.358 00:22:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:05.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:05.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:14:05.358 00:14:05.358 --- 10.0.0.2 ping statistics --- 00:14:05.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.358 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:05.358 00:22:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:05.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:05.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:14:05.358 00:14:05.358 --- 10.0.0.3 ping statistics --- 00:14:05.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.358 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:05.358 00:22:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:05.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:05.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:14:05.358 00:14:05.358 --- 10.0.0.1 ping statistics --- 00:14:05.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.358 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:05.358 00:22:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.358 00:22:52 -- nvmf/common.sh@421 -- # return 0 00:14:05.358 00:22:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:05.358 00:22:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.358 00:22:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:05.358 00:22:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:05.358 00:22:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.358 00:22:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:05.358 00:22:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:05.358 00:22:52 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:05.358 00:22:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:05.358 00:22:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:05.358 00:22:52 -- common/autotest_common.sh@10 -- # set +x 00:14:05.358 ************************************ 00:14:05.358 START TEST nvmf_host_management 00:14:05.358 ************************************ 00:14:05.358 00:22:52 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:14:05.358 00:22:52 -- target/host_management.sh@69 -- # starttarget 00:14:05.358 00:22:52 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:05.358 00:22:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:05.358 00:22:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:05.358 00:22:52 -- common/autotest_common.sh@10 -- # set +x 00:14:05.358 00:22:52 -- nvmf/common.sh@469 -- # nvmfpid=82385 00:14:05.358 00:22:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:05.358 00:22:52 -- nvmf/common.sh@470 -- # waitforlisten 82385 00:14:05.358 00:22:52 -- common/autotest_common.sh@819 -- # '[' -z 82385 ']' 00:14:05.358 00:22:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.358 00:22:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:05.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.358 00:22:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.358 00:22:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:05.358 00:22:52 -- common/autotest_common.sh@10 -- # set +x 00:14:05.358 [2024-07-13 00:22:52.521914] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:05.358 [2024-07-13 00:22:52.522064] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.617 [2024-07-13 00:22:52.664525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:05.617 [2024-07-13 00:22:52.764456] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:05.617 [2024-07-13 00:22:52.764966] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
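With the namespace wired up, nvmfappstart launches the target inside it and waitforlisten polls its RPC socket until the app is ready, which is the startup being logged here. A minimal stand-in for that pattern (binary path and flags exactly as in the trace; the polling loop below uses scripts/rpc.py rpc_get_methods in place of the real waitforlisten helper, which additionally checks process liveness and enforces a timeout):

# run the target inside the namespace so its TCP listeners live on
# nvmf_tgt_if / nvmf_tgt_if2
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# wait until the RPC socket answers before sending any configuration
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done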
00:14:05.617 [2024-07-13 00:22:52.765105] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.617 [2024-07-13 00:22:52.765246] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.617 [2024-07-13 00:22:52.765593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.617 [2024-07-13 00:22:52.765766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.617 [2024-07-13 00:22:52.766326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:05.617 [2024-07-13 00:22:52.766336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.551 00:22:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:06.551 00:22:53 -- common/autotest_common.sh@852 -- # return 0 00:14:06.551 00:22:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:06.551 00:22:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:06.551 00:22:53 -- common/autotest_common.sh@10 -- # set +x 00:14:06.551 00:22:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.551 00:22:53 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.551 00:22:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.551 00:22:53 -- common/autotest_common.sh@10 -- # set +x 00:14:06.551 [2024-07-13 00:22:53.560661] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.551 00:22:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.551 00:22:53 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:06.551 00:22:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:06.551 00:22:53 -- common/autotest_common.sh@10 -- # set +x 00:14:06.551 00:22:53 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:06.551 00:22:53 -- target/host_management.sh@23 -- # cat 00:14:06.551 00:22:53 -- target/host_management.sh@30 -- # rpc_cmd 00:14:06.551 00:22:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.551 00:22:53 -- common/autotest_common.sh@10 -- # set +x 00:14:06.551 Malloc0 00:14:06.551 [2024-07-13 00:22:53.648245] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.551 00:22:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.551 00:22:53 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:06.551 00:22:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:06.551 00:22:53 -- common/autotest_common.sh@10 -- # set +x 00:14:06.551 00:22:53 -- target/host_management.sh@73 -- # perfpid=82463 00:14:06.551 00:22:53 -- target/host_management.sh@74 -- # waitforlisten 82463 /var/tmp/bdevperf.sock 00:14:06.551 00:22:53 -- common/autotest_common.sh@819 -- # '[' -z 82463 ']' 00:14:06.551 00:22:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.551 00:22:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:06.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:06.551 00:22:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
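The rpcs.txt batch piped into rpc_cmd just above is not shown in the log, but judging from the transport call, the Malloc0 bdev and the listener notice, the provisioning it performs is roughly equivalent to the following individual calls (the malloc-bdev command, the serial number and the exact ordering are assumptions; the NQNs, address, port and sizes come from the surrounding trace):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$rpc nvmf_create_transport -t tcp -o -u 8192    # as traced above
$rpc bdev_malloc_create 64 512 -b Malloc0       # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The add_host entry matters for this test: the host-management checks later revoke and restore exactly that host NQN while I/O is in flight.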
00:14:06.551 00:22:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:06.551 00:22:53 -- common/autotest_common.sh@10 -- # set +x 00:14:06.551 00:22:53 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:06.551 00:22:53 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:06.551 00:22:53 -- nvmf/common.sh@520 -- # config=() 00:14:06.551 00:22:53 -- nvmf/common.sh@520 -- # local subsystem config 00:14:06.551 00:22:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:06.551 00:22:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:06.551 { 00:14:06.551 "params": { 00:14:06.551 "name": "Nvme$subsystem", 00:14:06.551 "trtype": "$TEST_TRANSPORT", 00:14:06.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:06.551 "adrfam": "ipv4", 00:14:06.551 "trsvcid": "$NVMF_PORT", 00:14:06.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:06.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:06.551 "hdgst": ${hdgst:-false}, 00:14:06.551 "ddgst": ${ddgst:-false} 00:14:06.551 }, 00:14:06.551 "method": "bdev_nvme_attach_controller" 00:14:06.551 } 00:14:06.551 EOF 00:14:06.551 )") 00:14:06.551 00:22:53 -- nvmf/common.sh@542 -- # cat 00:14:06.551 00:22:53 -- nvmf/common.sh@544 -- # jq . 00:14:06.551 00:22:53 -- nvmf/common.sh@545 -- # IFS=, 00:14:06.551 00:22:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:06.551 "params": { 00:14:06.551 "name": "Nvme0", 00:14:06.551 "trtype": "tcp", 00:14:06.551 "traddr": "10.0.0.2", 00:14:06.551 "adrfam": "ipv4", 00:14:06.551 "trsvcid": "4420", 00:14:06.551 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:06.551 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:06.551 "hdgst": false, 00:14:06.551 "ddgst": false 00:14:06.551 }, 00:14:06.551 "method": "bdev_nvme_attach_controller" 00:14:06.551 }' 00:14:06.551 [2024-07-13 00:22:53.747538] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:06.551 [2024-07-13 00:22:53.747650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82463 ] 00:14:06.810 [2024-07-13 00:22:53.888870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.810 [2024-07-13 00:22:53.991388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.069 Running I/O for 10 seconds... 
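The bdevperf invocation above takes its bdev configuration through --json /dev/fd/63, i.e. via process substitution of gen_nvmf_target_json. Reassembled into a plain file, that configuration plausibly looks as follows (the outer "subsystems"/"config" wrapper is the usual SPDK JSON-config shape and is assumed here; the attach-controller entry is the one printed in the trace):

cat > /tmp/nvme0.json <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false, "ddgst": false } } ] } ] }
JSON

# 64-deep queue, 64 KiB verify workload for 10 seconds against the attached bdev
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
    -q 64 -o 65536 -w verify -t 10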
00:14:07.639 00:22:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:07.639 00:22:54 -- common/autotest_common.sh@852 -- # return 0 00:14:07.639 00:22:54 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:07.639 00:22:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.639 00:22:54 -- common/autotest_common.sh@10 -- # set +x 00:14:07.639 00:22:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.639 00:22:54 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:07.639 00:22:54 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:07.639 00:22:54 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:07.639 00:22:54 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:07.639 00:22:54 -- target/host_management.sh@52 -- # local ret=1 00:14:07.639 00:22:54 -- target/host_management.sh@53 -- # local i 00:14:07.639 00:22:54 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:07.639 00:22:54 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:07.639 00:22:54 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:07.639 00:22:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.639 00:22:54 -- common/autotest_common.sh@10 -- # set +x 00:14:07.639 00:22:54 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:07.639 00:22:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.639 00:22:54 -- target/host_management.sh@55 -- # read_io_count=1821 00:14:07.639 00:22:54 -- target/host_management.sh@58 -- # '[' 1821 -ge 100 ']' 00:14:07.639 00:22:54 -- target/host_management.sh@59 -- # ret=0 00:14:07.639 00:22:54 -- target/host_management.sh@60 -- # break 00:14:07.639 00:22:54 -- target/host_management.sh@64 -- # return 0 00:14:07.639 00:22:54 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:07.639 00:22:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.639 00:22:54 -- common/autotest_common.sh@10 -- # set +x 00:14:07.639 [2024-07-13 00:22:54.803045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803206] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803223] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the 
state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803255] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803298] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803338] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803354] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803363] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803371] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803387] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803395] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803720] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803832] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803849] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.639 [2024-07-13 00:22:54.803963] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.640 [2024-07-13 00:22:54.803975] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.640 [2024-07-13 00:22:54.803983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.640 [2024-07-13 00:22:54.803991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.640 [2024-07-13 00:22:54.804000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.640 [2024-07-13 00:22:54.804008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.640 [2024-07-13 00:22:54.804016] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.640 [2024-07-13 00:22:54.804100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.640 [2024-07-13 00:22:54.804111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.640 [2024-07-13 00:22:54.804119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f12880 is same with the state(5) to be set 00:14:07.640 [2024-07-13 00:22:54.804229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.804976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.804992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.805006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.805023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.805037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.805053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.805067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:14:07.640 [2024-07-13 00:22:54.805083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.805096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.805113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.805126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.805142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.805157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.805174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.805189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.805204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.805218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.805235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.805248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.805265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.805278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.805294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.805308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.805324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.805338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.805362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.805377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:07.640 [2024-07-13 00:22:54.805394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.805407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.805423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.805437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.640 [2024-07-13 00:22:54.805453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.640 [2024-07-13 00:22:54.805466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.805483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.805496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.805513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.805528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.805545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.805559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.805575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.805588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.805604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.805634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.805652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.805667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.805683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.805697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 
[2024-07-13 00:22:54.805714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.805729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.805745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.805759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.805776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.805790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.805806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.805820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.805837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.805853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.805875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.805901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.805918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.805932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.805947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.805966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.805984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.805998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.806014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.806028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 
00:22:54.806044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.806058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.806074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.806088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.806103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.806117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.806133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.806147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.806163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.806177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.806193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.806207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.806223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.806237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.806252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.806265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.806281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.806294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.806310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.806324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 
00:22:54.806340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.806353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.806376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.641 [2024-07-13 00:22:54.806390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.641 [2024-07-13 00:22:54.806405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16620c0 is same with the state(5) to be set 00:14:07.641 [2024-07-13 00:22:54.806489] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16620c0 was disconnected and freed. reset controller. 00:14:07.641 00:22:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.641 00:22:54 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:07.641 00:22:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.641 00:22:54 -- common/autotest_common.sh@10 -- # set +x 00:14:07.641 [2024-07-13 00:22:54.807902] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:07.641 task offset: 118016 on job bdev=Nvme0n1 fails 00:14:07.641 00:14:07.641 Latency(us) 00:14:07.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.641 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:07.641 Job: Nvme0n1 ended in about 0.63 seconds with error 00:14:07.641 Verification LBA range: start 0x0 length 0x400 00:14:07.641 Nvme0n1 : 0.63 3141.46 196.34 101.59 0.00 19389.84 5183.30 26810.18 00:14:07.641 =================================================================================================================== 00:14:07.641 Total : 3141.46 196.34 101.59 0.00 19389.84 5183.30 26810.18 00:14:07.641 [2024-07-13 00:22:54.810731] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:07.641 [2024-07-13 00:22:54.810787] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16646d0 (9): Bad file descriptor 00:14:07.641 00:22:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.641 00:22:54 -- target/host_management.sh@87 -- # sleep 1 00:14:07.641 [2024-07-13 00:22:54.817408] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
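Note on the burst of *NOTICE* lines above: every in-flight READ/WRITE on qpair 1 completes with ABORTED - SQ DELETION (status 00/08) once the submission queue is deleted, bdev_nvme then reports the qpair as disconnected and freed, and after nvmf_subsystem_add_host re-admits the initiator the controller reset is reported successful. As a rough sketch only (not something the harness runs), the recovered controller could be inspected from the initiator side over the bdevperf RPC socket used elsewhere in this log:

# sketch only -- assumes the /var/tmp/bdevperf.sock RPC socket shown later in this log
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers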
00:14:08.633 00:22:55 -- target/host_management.sh@91 -- # kill -9 82463 00:14:08.634 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82463) - No such process 00:14:08.634 00:22:55 -- target/host_management.sh@91 -- # true 00:14:08.634 00:22:55 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:08.634 00:22:55 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:08.634 00:22:55 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:08.634 00:22:55 -- nvmf/common.sh@520 -- # config=() 00:14:08.634 00:22:55 -- nvmf/common.sh@520 -- # local subsystem config 00:14:08.634 00:22:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:08.634 00:22:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:08.634 { 00:14:08.634 "params": { 00:14:08.634 "name": "Nvme$subsystem", 00:14:08.634 "trtype": "$TEST_TRANSPORT", 00:14:08.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:08.634 "adrfam": "ipv4", 00:14:08.634 "trsvcid": "$NVMF_PORT", 00:14:08.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:08.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:08.634 "hdgst": ${hdgst:-false}, 00:14:08.634 "ddgst": ${ddgst:-false} 00:14:08.634 }, 00:14:08.634 "method": "bdev_nvme_attach_controller" 00:14:08.634 } 00:14:08.634 EOF 00:14:08.634 )") 00:14:08.634 00:22:55 -- nvmf/common.sh@542 -- # cat 00:14:08.634 00:22:55 -- nvmf/common.sh@544 -- # jq . 00:14:08.634 00:22:55 -- nvmf/common.sh@545 -- # IFS=, 00:14:08.634 00:22:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:08.634 "params": { 00:14:08.634 "name": "Nvme0", 00:14:08.634 "trtype": "tcp", 00:14:08.634 "traddr": "10.0.0.2", 00:14:08.634 "adrfam": "ipv4", 00:14:08.634 "trsvcid": "4420", 00:14:08.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:08.634 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:08.634 "hdgst": false, 00:14:08.634 "ddgst": false 00:14:08.634 }, 00:14:08.634 "method": "bdev_nvme_attach_controller" 00:14:08.634 }' 00:14:08.892 [2024-07-13 00:22:55.875725] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:08.892 [2024-07-13 00:22:55.875841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82512 ] 00:14:08.892 [2024-07-13 00:22:56.014326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.892 [2024-07-13 00:22:56.122171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.152 Running I/O for 1 seconds... 
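The bdevperf restart above takes its bdev configuration as JSON on /dev/fd/62, assembled by gen_nvmf_target_json from the parameter template printed in the trace. A hand-written equivalent is sketched below, assuming the usual SPDK "subsystems" config wrapper and an ordinary temp file instead of a file descriptor (both are illustrative, not exactly what the harness does):

# sketch: feed the same attach-controller parameters to bdevperf from a plain JSON file
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 1

The params block matches the one printed by the trace above; only the wrapper structure and the temporary file name are assumptions.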
00:14:10.528 00:14:10.528 Latency(us) 00:14:10.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.528 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:10.528 Verification LBA range: start 0x0 length 0x400 00:14:10.528 Nvme0n1 : 1.01 3542.86 221.43 0.00 0.00 17750.07 763.35 23831.27 00:14:10.529 =================================================================================================================== 00:14:10.529 Total : 3542.86 221.43 0.00 0.00 17750.07 763.35 23831.27 00:14:10.529 00:22:57 -- target/host_management.sh@101 -- # stoptarget 00:14:10.529 00:22:57 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:10.529 00:22:57 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:10.529 00:22:57 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:10.529 00:22:57 -- target/host_management.sh@40 -- # nvmftestfini 00:14:10.529 00:22:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:10.529 00:22:57 -- nvmf/common.sh@116 -- # sync 00:14:10.529 00:22:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:10.529 00:22:57 -- nvmf/common.sh@119 -- # set +e 00:14:10.529 00:22:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:10.529 00:22:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:10.529 rmmod nvme_tcp 00:14:10.529 rmmod nvme_fabrics 00:14:10.529 rmmod nvme_keyring 00:14:10.529 00:22:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:10.529 00:22:57 -- nvmf/common.sh@123 -- # set -e 00:14:10.529 00:22:57 -- nvmf/common.sh@124 -- # return 0 00:14:10.529 00:22:57 -- nvmf/common.sh@477 -- # '[' -n 82385 ']' 00:14:10.529 00:22:57 -- nvmf/common.sh@478 -- # killprocess 82385 00:14:10.529 00:22:57 -- common/autotest_common.sh@926 -- # '[' -z 82385 ']' 00:14:10.529 00:22:57 -- common/autotest_common.sh@930 -- # kill -0 82385 00:14:10.529 00:22:57 -- common/autotest_common.sh@931 -- # uname 00:14:10.529 00:22:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:10.529 00:22:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82385 00:14:10.529 killing process with pid 82385 00:14:10.529 00:22:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:10.529 00:22:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:10.529 00:22:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82385' 00:14:10.529 00:22:57 -- common/autotest_common.sh@945 -- # kill 82385 00:14:10.529 00:22:57 -- common/autotest_common.sh@950 -- # wait 82385 00:14:10.787 [2024-07-13 00:22:57.907195] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:10.787 00:22:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:10.787 00:22:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:10.787 00:22:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:10.787 00:22:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.787 00:22:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:10.787 00:22:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.787 00:22:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.787 00:22:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.787 00:22:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:10.787 00:14:10.787 real 0m5.521s 00:14:10.787 user 
0m23.062s 00:14:10.787 sys 0m1.281s 00:14:10.787 00:22:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.787 ************************************ 00:14:10.787 END TEST nvmf_host_management 00:14:10.787 ************************************ 00:14:10.787 00:22:57 -- common/autotest_common.sh@10 -- # set +x 00:14:11.046 00:22:58 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:11.046 ************************************ 00:14:11.046 END TEST nvmf_host_management 00:14:11.046 ************************************ 00:14:11.046 00:14:11.046 real 0m6.070s 00:14:11.046 user 0m23.195s 00:14:11.046 sys 0m1.521s 00:14:11.046 00:22:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.046 00:22:58 -- common/autotest_common.sh@10 -- # set +x 00:14:11.046 00:22:58 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:11.046 00:22:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:11.046 00:22:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:11.046 00:22:58 -- common/autotest_common.sh@10 -- # set +x 00:14:11.046 ************************************ 00:14:11.046 START TEST nvmf_lvol 00:14:11.046 ************************************ 00:14:11.046 00:22:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:11.046 * Looking for test storage... 00:14:11.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:11.046 00:22:58 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:11.046 00:22:58 -- nvmf/common.sh@7 -- # uname -s 00:14:11.046 00:22:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.046 00:22:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.046 00:22:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.046 00:22:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.046 00:22:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.046 00:22:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.046 00:22:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.046 00:22:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.046 00:22:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.046 00:22:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.046 00:22:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:14:11.046 00:22:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:14:11.046 00:22:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.046 00:22:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.046 00:22:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:11.046 00:22:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:11.046 00:22:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.046 00:22:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.046 00:22:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.046 00:22:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.046 00:22:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.046 00:22:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.046 00:22:58 -- paths/export.sh@5 -- # export PATH 00:14:11.046 00:22:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.046 00:22:58 -- nvmf/common.sh@46 -- # : 0 00:14:11.046 00:22:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:11.046 00:22:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:11.046 00:22:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:11.046 00:22:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.046 00:22:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.046 00:22:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:11.046 00:22:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:11.046 00:22:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:11.046 00:22:58 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:11.046 00:22:58 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:11.046 00:22:58 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:11.046 00:22:58 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:11.046 00:22:58 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:11.046 00:22:58 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:11.046 00:22:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:11.046 00:22:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:14:11.046 00:22:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:11.046 00:22:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:11.046 00:22:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:11.046 00:22:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.046 00:22:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.046 00:22:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.046 00:22:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:11.046 00:22:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:11.046 00:22:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:11.046 00:22:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:11.046 00:22:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:11.046 00:22:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:11.046 00:22:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.046 00:22:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.046 00:22:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:11.046 00:22:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:11.046 00:22:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:11.046 00:22:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:11.046 00:22:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:11.046 00:22:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.046 00:22:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:11.046 00:22:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:11.046 00:22:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:11.046 00:22:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:11.046 00:22:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:11.046 00:22:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:11.046 Cannot find device "nvmf_tgt_br" 00:14:11.046 00:22:58 -- nvmf/common.sh@154 -- # true 00:14:11.046 00:22:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:11.046 Cannot find device "nvmf_tgt_br2" 00:14:11.046 00:22:58 -- nvmf/common.sh@155 -- # true 00:14:11.046 00:22:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:11.047 00:22:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:11.047 Cannot find device "nvmf_tgt_br" 00:14:11.047 00:22:58 -- nvmf/common.sh@157 -- # true 00:14:11.047 00:22:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:11.047 Cannot find device "nvmf_tgt_br2" 00:14:11.047 00:22:58 -- nvmf/common.sh@158 -- # true 00:14:11.047 00:22:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:11.305 00:22:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:11.305 00:22:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:11.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:11.305 00:22:58 -- nvmf/common.sh@161 -- # true 00:14:11.305 00:22:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:11.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:11.305 00:22:58 -- nvmf/common.sh@162 -- # true 00:14:11.305 00:22:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:11.305 00:22:58 -- nvmf/common.sh@168 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:14:11.305 00:22:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:11.305 00:22:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:11.305 00:22:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:11.305 00:22:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:11.305 00:22:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:11.305 00:22:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:11.305 00:22:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:11.305 00:22:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:11.305 00:22:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:11.305 00:22:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:11.305 00:22:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:11.305 00:22:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:11.305 00:22:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:11.305 00:22:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:11.305 00:22:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:11.305 00:22:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:11.305 00:22:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:11.305 00:22:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:11.305 00:22:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:11.305 00:22:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:11.305 00:22:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:11.305 00:22:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:11.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:11.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:14:11.305 00:14:11.305 --- 10.0.0.2 ping statistics --- 00:14:11.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.305 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:14:11.305 00:22:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:11.305 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:11.305 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:14:11.305 00:14:11.305 --- 10.0.0.3 ping statistics --- 00:14:11.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.305 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:11.305 00:22:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:11.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:11.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:11.305 00:14:11.305 --- 10.0.0.1 ping statistics --- 00:14:11.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.305 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:11.305 00:22:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.305 00:22:58 -- nvmf/common.sh@421 -- # return 0 00:14:11.305 00:22:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:11.305 00:22:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.305 00:22:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:11.305 00:22:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:11.305 00:22:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.305 00:22:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:11.305 00:22:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:11.305 00:22:58 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:11.305 00:22:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:11.305 00:22:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:11.305 00:22:58 -- common/autotest_common.sh@10 -- # set +x 00:14:11.563 00:22:58 -- nvmf/common.sh@469 -- # nvmfpid=82738 00:14:11.563 00:22:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:11.563 00:22:58 -- nvmf/common.sh@470 -- # waitforlisten 82738 00:14:11.563 00:22:58 -- common/autotest_common.sh@819 -- # '[' -z 82738 ']' 00:14:11.563 00:22:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.563 00:22:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:11.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.563 00:22:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.563 00:22:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:11.563 00:22:58 -- common/autotest_common.sh@10 -- # set +x 00:14:11.563 [2024-07-13 00:22:58.591953] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:11.563 [2024-07-13 00:22:58.592067] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.563 [2024-07-13 00:22:58.736467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:11.821 [2024-07-13 00:22:58.833694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:11.821 [2024-07-13 00:22:58.833892] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.821 [2024-07-13 00:22:58.833905] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.821 [2024-07-13 00:22:58.833914] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
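The nvmf_veth_init sequence above builds the test topology: a network namespace nvmf_tgt_ns_spdk holding the two target-side veth ends (10.0.0.2 and 10.0.0.3), an initiator-side interface at 10.0.0.1, everything bridged through nvmf_br, with iptables rules opening TCP port 4420; the three pings confirm reachability before the target is launched inside the namespace. Condensed into a standalone sketch with the same device names and addresses as in the log:

# condensed sketch of the topology nvmf_veth_init builds above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With this in place the target started inside the namespace listens on 10.0.0.2/10.0.0.3 while the initiator-side tools connect from 10.0.0.1.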
00:14:11.821 [2024-07-13 00:22:58.834059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.821 [2024-07-13 00:22:58.834188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.821 [2024-07-13 00:22:58.834194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.387 00:22:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:12.387 00:22:59 -- common/autotest_common.sh@852 -- # return 0 00:14:12.387 00:22:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:12.387 00:22:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:12.387 00:22:59 -- common/autotest_common.sh@10 -- # set +x 00:14:12.387 00:22:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.387 00:22:59 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:12.645 [2024-07-13 00:22:59.796394] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.645 00:22:59 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:12.903 00:23:00 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:12.903 00:23:00 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:13.470 00:23:00 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:13.470 00:23:00 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:13.470 00:23:00 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:14.036 00:23:00 -- target/nvmf_lvol.sh@29 -- # lvs=676ff895-3ed0-4392-bb22-4e75388de6af 00:14:14.036 00:23:00 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 676ff895-3ed0-4392-bb22-4e75388de6af lvol 20 00:14:14.036 00:23:01 -- target/nvmf_lvol.sh@32 -- # lvol=c64d0608-4554-4abc-8625-e046e385af0e 00:14:14.036 00:23:01 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:14.615 00:23:01 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c64d0608-4554-4abc-8625-e046e385af0e 00:14:14.615 00:23:01 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:14.873 [2024-07-13 00:23:02.021746] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.873 00:23:02 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:15.131 00:23:02 -- target/nvmf_lvol.sh@42 -- # perf_pid=82886 00:14:15.131 00:23:02 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:15.131 00:23:02 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:16.504 00:23:03 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot c64d0608-4554-4abc-8625-e046e385af0e MY_SNAPSHOT 00:14:16.504 00:23:03 -- target/nvmf_lvol.sh@47 -- # snapshot=50a1666d-8eee-4a1d-b687-e0f2800eee37 00:14:16.504 00:23:03 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize c64d0608-4554-4abc-8625-e046e385af0e 30 00:14:16.762 00:23:03 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 50a1666d-8eee-4a1d-b687-e0f2800eee37 MY_CLONE 00:14:17.019 00:23:04 -- target/nvmf_lvol.sh@49 -- # clone=4beab6a0-5edd-4553-82b5-56366bae686f 00:14:17.019 00:23:04 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 4beab6a0-5edd-4553-82b5-56366bae686f 00:14:17.953 00:23:04 -- target/nvmf_lvol.sh@53 -- # wait 82886 00:14:26.064 Initializing NVMe Controllers 00:14:26.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:26.064 Controller IO queue size 128, less than required. 00:14:26.064 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:26.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:26.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:26.064 Initialization complete. Launching workers. 00:14:26.064 ======================================================== 00:14:26.064 Latency(us) 00:14:26.064 Device Information : IOPS MiB/s Average min max 00:14:26.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10301.90 40.24 12432.45 1836.87 76078.55 00:14:26.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10229.70 39.96 12517.08 3448.56 83207.83 00:14:26.064 ======================================================== 00:14:26.064 Total : 20531.60 80.20 12474.62 1836.87 83207.83 00:14:26.064 00:14:26.064 00:23:12 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:26.064 00:23:12 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c64d0608-4554-4abc-8625-e046e385af0e 00:14:26.064 00:23:13 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 676ff895-3ed0-4392-bb22-4e75388de6af 00:14:26.323 00:23:13 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:26.323 00:23:13 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:26.323 00:23:13 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:26.323 00:23:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:26.323 00:23:13 -- nvmf/common.sh@116 -- # sync 00:14:26.323 00:23:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:26.323 00:23:13 -- nvmf/common.sh@119 -- # set +e 00:14:26.323 00:23:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:26.323 00:23:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:26.323 rmmod nvme_tcp 00:14:26.323 rmmod nvme_fabrics 00:14:26.323 rmmod nvme_keyring 00:14:26.323 00:23:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:26.323 00:23:13 -- nvmf/common.sh@123 -- # set -e 00:14:26.323 00:23:13 -- nvmf/common.sh@124 -- # return 0 00:14:26.323 00:23:13 -- nvmf/common.sh@477 -- # '[' -n 82738 ']' 00:14:26.323 00:23:13 -- nvmf/common.sh@478 -- # killprocess 82738 00:14:26.323 00:23:13 -- common/autotest_common.sh@926 -- # '[' -z 82738 ']' 00:14:26.323 00:23:13 -- common/autotest_common.sh@930 -- # kill -0 82738 00:14:26.323 00:23:13 -- common/autotest_common.sh@931 -- # uname 00:14:26.323 00:23:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:26.323 00:23:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 82738 00:14:26.323 00:23:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:26.323 killing process with pid 82738 00:14:26.323 00:23:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:26.323 00:23:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82738' 00:14:26.323 00:23:13 -- common/autotest_common.sh@945 -- # kill 82738 00:14:26.323 00:23:13 -- common/autotest_common.sh@950 -- # wait 82738 00:14:26.581 00:23:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:26.581 00:23:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:26.581 00:23:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:26.581 00:23:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:26.581 00:23:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:26.581 00:23:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.581 00:23:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.581 00:23:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.581 00:23:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:26.581 00:14:26.581 real 0m15.732s 00:14:26.581 user 1m6.024s 00:14:26.581 sys 0m3.664s 00:14:26.581 00:23:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.581 ************************************ 00:14:26.581 00:23:13 -- common/autotest_common.sh@10 -- # set +x 00:14:26.581 END TEST nvmf_lvol 00:14:26.581 ************************************ 00:14:26.839 00:23:13 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:26.839 00:23:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:26.839 00:23:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:26.839 00:23:13 -- common/autotest_common.sh@10 -- # set +x 00:14:26.839 ************************************ 00:14:26.839 START TEST nvmf_lvs_grow 00:14:26.839 ************************************ 00:14:26.839 00:23:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:26.839 * Looking for test storage... 
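Recapping the nvmf_lvol run that finished just above before the lvs_grow output continues: it builds the volume stack entirely over RPC, striping two 64 MiB malloc bdevs into a raid0, creating an lvstore on the raid, carving out a 20 MiB lvol that is exported over NVMe/TCP, then snapshotting, resizing to 30 MiB, cloning and inflating it while spdk_nvme_perf writes to the namespace. A condensed sketch (UUIDs are whatever each call returns; rpc points at the tree used in this run):

# condensed recap of the nvmf_lvol RPC sequence traced above
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                              # -> Malloc0
$rpc bdev_malloc_create 64 512                              # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)              # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)             # 20 MiB lvol
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"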
00:14:26.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:26.839 00:23:13 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:26.839 00:23:13 -- nvmf/common.sh@7 -- # uname -s 00:14:26.839 00:23:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.839 00:23:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.839 00:23:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.839 00:23:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.839 00:23:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.839 00:23:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.839 00:23:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.839 00:23:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.839 00:23:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.839 00:23:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.839 00:23:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:14:26.839 00:23:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:14:26.839 00:23:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.839 00:23:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.839 00:23:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:26.839 00:23:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:26.839 00:23:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.839 00:23:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.839 00:23:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.839 00:23:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.839 00:23:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.839 00:23:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.839 00:23:13 -- 
paths/export.sh@5 -- # export PATH 00:14:26.839 00:23:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.839 00:23:13 -- nvmf/common.sh@46 -- # : 0 00:14:26.839 00:23:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:26.839 00:23:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:26.839 00:23:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:26.839 00:23:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.839 00:23:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.839 00:23:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:26.839 00:23:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:26.839 00:23:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:26.839 00:23:13 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:26.839 00:23:13 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:26.839 00:23:13 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:26.839 00:23:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:26.839 00:23:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.839 00:23:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:26.839 00:23:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:26.839 00:23:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:26.839 00:23:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.839 00:23:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.839 00:23:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.839 00:23:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:26.839 00:23:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:26.839 00:23:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:26.839 00:23:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:26.839 00:23:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:26.839 00:23:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:26.839 00:23:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.839 00:23:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.839 00:23:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:26.839 00:23:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:26.839 00:23:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:26.839 00:23:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:26.839 00:23:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:26.839 00:23:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.839 00:23:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:26.839 00:23:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:26.839 00:23:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:26.839 00:23:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:26.839 00:23:13 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:26.839 00:23:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:26.839 Cannot find device "nvmf_tgt_br" 00:14:26.839 00:23:13 -- nvmf/common.sh@154 -- # true 00:14:26.839 00:23:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:26.839 Cannot find device "nvmf_tgt_br2" 00:14:26.839 00:23:14 -- nvmf/common.sh@155 -- # true 00:14:26.839 00:23:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:26.839 00:23:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:26.839 Cannot find device "nvmf_tgt_br" 00:14:26.839 00:23:14 -- nvmf/common.sh@157 -- # true 00:14:26.839 00:23:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:26.839 Cannot find device "nvmf_tgt_br2" 00:14:26.839 00:23:14 -- nvmf/common.sh@158 -- # true 00:14:26.839 00:23:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:27.097 00:23:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:27.097 00:23:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:27.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:27.097 00:23:14 -- nvmf/common.sh@161 -- # true 00:14:27.097 00:23:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:27.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:27.097 00:23:14 -- nvmf/common.sh@162 -- # true 00:14:27.097 00:23:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:27.097 00:23:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:27.097 00:23:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:27.097 00:23:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:27.097 00:23:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:27.097 00:23:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:27.097 00:23:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:27.097 00:23:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:27.097 00:23:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:27.097 00:23:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:27.097 00:23:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:27.097 00:23:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:27.097 00:23:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:27.097 00:23:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:27.097 00:23:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:27.097 00:23:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:27.097 00:23:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:27.097 00:23:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:27.097 00:23:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:27.097 00:23:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:27.097 00:23:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:27.097 00:23:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:27.097 00:23:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:27.097 00:23:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:27.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:14:27.097 00:14:27.097 --- 10.0.0.2 ping statistics --- 00:14:27.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.097 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:14:27.097 00:23:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:27.097 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:27.097 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:14:27.097 00:14:27.097 --- 10.0.0.3 ping statistics --- 00:14:27.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.097 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:27.097 00:23:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:27.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:14:27.097 00:14:27.097 --- 10.0.0.1 ping statistics --- 00:14:27.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.097 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:27.097 00:23:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.097 00:23:14 -- nvmf/common.sh@421 -- # return 0 00:14:27.097 00:23:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:27.097 00:23:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.097 00:23:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:27.097 00:23:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:27.097 00:23:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.097 00:23:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:27.097 00:23:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:27.355 00:23:14 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:27.355 00:23:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:27.355 00:23:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:27.355 00:23:14 -- common/autotest_common.sh@10 -- # set +x 00:14:27.355 00:23:14 -- nvmf/common.sh@469 -- # nvmfpid=83248 00:14:27.355 00:23:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:27.355 00:23:14 -- nvmf/common.sh@470 -- # waitforlisten 83248 00:14:27.355 00:23:14 -- common/autotest_common.sh@819 -- # '[' -z 83248 ']' 00:14:27.355 00:23:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.355 00:23:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:27.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.355 00:23:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.355 00:23:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:27.355 00:23:14 -- common/autotest_common.sh@10 -- # set +x 00:14:27.355 [2024-07-13 00:23:14.406386] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
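The nvmfappstart step above launches the target inside the namespace (nvmfpid=83248) and waitforlisten blocks until its RPC socket answers. A minimal stand-in for that pattern, assuming the /var/tmp/spdk.sock rpc_addr traced above and using rpc_get_methods purely as a liveness probe (the real waitforlisten helper is more involved):

# sketch: start nvmf_tgt in the namespace and poll its RPC socket until it responds
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done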
00:14:27.355 [2024-07-13 00:23:14.406504] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.355 [2024-07-13 00:23:14.551438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.612 [2024-07-13 00:23:14.651845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:27.612 [2024-07-13 00:23:14.652038] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.612 [2024-07-13 00:23:14.652054] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.612 [2024-07-13 00:23:14.652065] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.612 [2024-07-13 00:23:14.652097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.178 00:23:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:28.178 00:23:15 -- common/autotest_common.sh@852 -- # return 0 00:14:28.178 00:23:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:28.178 00:23:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:28.178 00:23:15 -- common/autotest_common.sh@10 -- # set +x 00:14:28.178 00:23:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.178 00:23:15 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:28.436 [2024-07-13 00:23:15.634527] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.436 00:23:15 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:28.436 00:23:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:28.436 00:23:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:28.436 00:23:15 -- common/autotest_common.sh@10 -- # set +x 00:14:28.436 ************************************ 00:14:28.436 START TEST lvs_grow_clean 00:14:28.436 ************************************ 00:14:28.436 00:23:15 -- common/autotest_common.sh@1104 -- # lvs_grow 00:14:28.436 00:23:15 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:28.436 00:23:15 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:28.436 00:23:15 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:28.436 00:23:15 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:28.695 00:23:15 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:28.696 00:23:15 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:28.696 00:23:15 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:28.696 00:23:15 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:28.696 00:23:15 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:28.696 00:23:15 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:28.696 00:23:15 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:29.262 00:23:16 -- target/nvmf_lvs_grow.sh@28 
-- # lvs=e094fa87-6863-4dd8-b229-16b55187a371 00:14:29.262 00:23:16 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e094fa87-6863-4dd8-b229-16b55187a371 00:14:29.262 00:23:16 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:29.262 00:23:16 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:29.262 00:23:16 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:29.262 00:23:16 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e094fa87-6863-4dd8-b229-16b55187a371 lvol 150 00:14:29.520 00:23:16 -- target/nvmf_lvs_grow.sh@33 -- # lvol=9a24cd7d-6ac9-4008-b550-6de45b83bb0e 00:14:29.520 00:23:16 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:29.520 00:23:16 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:29.779 [2024-07-13 00:23:16.864319] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:29.779 [2024-07-13 00:23:16.864428] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:29.779 true 00:14:29.779 00:23:16 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e094fa87-6863-4dd8-b229-16b55187a371 00:14:29.779 00:23:16 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:30.037 00:23:17 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:30.037 00:23:17 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:30.297 00:23:17 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9a24cd7d-6ac9-4008-b550-6de45b83bb0e 00:14:30.297 00:23:17 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:30.556 [2024-07-13 00:23:17.761019] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.556 00:23:17 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:30.816 00:23:17 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83404 00:14:30.816 00:23:17 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:30.816 00:23:17 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:30.816 00:23:17 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83404 /var/tmp/bdevperf.sock 00:14:30.816 00:23:17 -- common/autotest_common.sh@819 -- # '[' -z 83404 ']' 00:14:30.816 00:23:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:30.816 00:23:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:30.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:30.816 00:23:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
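The lvs_grow_clean case above builds an lvstore on a 200 MiB file-backed AIO bdev with 4 MiB clusters (50 clusters total, of which 49 remain as data clusters once lvstore metadata takes its share), carves a 150 MiB lvol from it and exports it over NVMe/TCP to the bdevperf instance being started here; the backing file has already been grown to 400 MiB and rescanned, and later in the run the lvstore itself is grown in place and re-checked. The grow path, condensed into a sketch with the same paths and sizes as in the log:

# condensed sketch of the grow path exercised by lvs_grow_clean
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$aio_file"
$rpc bdev_aio_create "$aio_file" aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before the grow
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M "$aio_file"
$rpc bdev_aio_rescan aio_bdev
$rpc bdev_lvol_grow_lvstore -u "$lvs"
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 after the grow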
00:14:30.816 00:23:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:30.816 00:23:17 -- common/autotest_common.sh@10 -- # set +x 00:14:30.816 [2024-07-13 00:23:18.030029] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:30.816 [2024-07-13 00:23:18.030157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83404 ] 00:14:31.075 [2024-07-13 00:23:18.162307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.075 [2024-07-13 00:23:18.289573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.012 00:23:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:32.012 00:23:18 -- common/autotest_common.sh@852 -- # return 0 00:14:32.012 00:23:18 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:32.012 Nvme0n1 00:14:32.272 00:23:19 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:32.272 [ 00:14:32.272 { 00:14:32.272 "aliases": [ 00:14:32.272 "9a24cd7d-6ac9-4008-b550-6de45b83bb0e" 00:14:32.272 ], 00:14:32.272 "assigned_rate_limits": { 00:14:32.272 "r_mbytes_per_sec": 0, 00:14:32.272 "rw_ios_per_sec": 0, 00:14:32.272 "rw_mbytes_per_sec": 0, 00:14:32.272 "w_mbytes_per_sec": 0 00:14:32.272 }, 00:14:32.272 "block_size": 4096, 00:14:32.272 "claimed": false, 00:14:32.272 "driver_specific": { 00:14:32.272 "mp_policy": "active_passive", 00:14:32.272 "nvme": [ 00:14:32.272 { 00:14:32.272 "ctrlr_data": { 00:14:32.272 "ana_reporting": false, 00:14:32.272 "cntlid": 1, 00:14:32.272 "firmware_revision": "24.01.1", 00:14:32.272 "model_number": "SPDK bdev Controller", 00:14:32.272 "multi_ctrlr": true, 00:14:32.272 "oacs": { 00:14:32.272 "firmware": 0, 00:14:32.272 "format": 0, 00:14:32.272 "ns_manage": 0, 00:14:32.272 "security": 0 00:14:32.272 }, 00:14:32.272 "serial_number": "SPDK0", 00:14:32.272 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:32.272 "vendor_id": "0x8086" 00:14:32.272 }, 00:14:32.272 "ns_data": { 00:14:32.272 "can_share": true, 00:14:32.272 "id": 1 00:14:32.272 }, 00:14:32.272 "trid": { 00:14:32.272 "adrfam": "IPv4", 00:14:32.272 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:32.272 "traddr": "10.0.0.2", 00:14:32.272 "trsvcid": "4420", 00:14:32.272 "trtype": "TCP" 00:14:32.272 }, 00:14:32.272 "vs": { 00:14:32.272 "nvme_version": "1.3" 00:14:32.272 } 00:14:32.272 } 00:14:32.272 ] 00:14:32.272 }, 00:14:32.272 "name": "Nvme0n1", 00:14:32.272 "num_blocks": 38912, 00:14:32.272 "product_name": "NVMe disk", 00:14:32.272 "supported_io_types": { 00:14:32.272 "abort": true, 00:14:32.272 "compare": true, 00:14:32.272 "compare_and_write": true, 00:14:32.272 "flush": true, 00:14:32.272 "nvme_admin": true, 00:14:32.272 "nvme_io": true, 00:14:32.272 "read": true, 00:14:32.272 "reset": true, 00:14:32.272 "unmap": true, 00:14:32.272 "write": true, 00:14:32.272 "write_zeroes": true 00:14:32.272 }, 00:14:32.272 "uuid": "9a24cd7d-6ac9-4008-b550-6de45b83bb0e", 00:14:32.272 "zoned": false 00:14:32.272 } 00:14:32.272 ] 00:14:32.272 00:23:19 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83457 00:14:32.272 00:23:19 -- target/nvmf_lvs_grow.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:32.272 00:23:19 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:32.531 Running I/O for 10 seconds... 00:14:33.468 Latency(us) 00:14:33.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.468 Nvme0n1 : 1.00 7520.00 29.38 0.00 0.00 0.00 0.00 0.00 00:14:33.468 =================================================================================================================== 00:14:33.468 Total : 7520.00 29.38 0.00 0.00 0.00 0.00 0.00 00:14:33.468 00:14:34.406 00:23:21 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e094fa87-6863-4dd8-b229-16b55187a371 00:14:34.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.406 Nvme0n1 : 2.00 7547.50 29.48 0.00 0.00 0.00 0.00 0.00 00:14:34.406 =================================================================================================================== 00:14:34.406 Total : 7547.50 29.48 0.00 0.00 0.00 0.00 0.00 00:14:34.406 00:14:34.665 true 00:14:34.665 00:23:21 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e094fa87-6863-4dd8-b229-16b55187a371 00:14:34.665 00:23:21 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:34.924 00:23:22 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:34.924 00:23:22 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:34.924 00:23:22 -- target/nvmf_lvs_grow.sh@65 -- # wait 83457 00:14:35.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.493 Nvme0n1 : 3.00 7568.33 29.56 0.00 0.00 0.00 0.00 0.00 00:14:35.493 =================================================================================================================== 00:14:35.493 Total : 7568.33 29.56 0.00 0.00 0.00 0.00 0.00 00:14:35.493 00:14:36.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.426 Nvme0n1 : 4.00 7531.75 29.42 0.00 0.00 0.00 0.00 0.00 00:14:36.426 =================================================================================================================== 00:14:36.426 Total : 7531.75 29.42 0.00 0.00 0.00 0.00 0.00 00:14:36.426 00:14:37.359 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.359 Nvme0n1 : 5.00 7583.60 29.62 0.00 0.00 0.00 0.00 0.00 00:14:37.359 =================================================================================================================== 00:14:37.359 Total : 7583.60 29.62 0.00 0.00 0.00 0.00 0.00 00:14:37.359 00:14:38.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.730 Nvme0n1 : 6.00 7669.50 29.96 0.00 0.00 0.00 0.00 0.00 00:14:38.730 =================================================================================================================== 00:14:38.730 Total : 7669.50 29.96 0.00 0.00 0.00 0.00 0.00 00:14:38.730 00:14:39.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.664 Nvme0n1 : 7.00 7860.57 30.71 0.00 0.00 0.00 0.00 0.00 00:14:39.664 =================================================================================================================== 00:14:39.664 Total : 7860.57 30.71 0.00 0.00 0.00 0.00 0.00 00:14:39.664 00:14:40.598 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:14:40.598 Nvme0n1 : 8.00 7829.62 30.58 0.00 0.00 0.00 0.00 0.00 00:14:40.598 =================================================================================================================== 00:14:40.598 Total : 7829.62 30.58 0.00 0.00 0.00 0.00 0.00 00:14:40.598 00:14:41.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.587 Nvme0n1 : 9.00 7814.89 30.53 0.00 0.00 0.00 0.00 0.00 00:14:41.587 =================================================================================================================== 00:14:41.587 Total : 7814.89 30.53 0.00 0.00 0.00 0.00 0.00 00:14:41.587 00:14:42.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.523 Nvme0n1 : 10.00 7769.80 30.35 0.00 0.00 0.00 0.00 0.00 00:14:42.523 =================================================================================================================== 00:14:42.523 Total : 7769.80 30.35 0.00 0.00 0.00 0.00 0.00 00:14:42.523 00:14:42.523 00:14:42.523 Latency(us) 00:14:42.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.523 Nvme0n1 : 10.01 7774.77 30.37 0.00 0.00 16459.76 6404.65 40989.79 00:14:42.523 =================================================================================================================== 00:14:42.523 Total : 7774.77 30.37 0.00 0.00 16459.76 6404.65 40989.79 00:14:42.523 0 00:14:42.523 00:23:29 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83404 00:14:42.523 00:23:29 -- common/autotest_common.sh@926 -- # '[' -z 83404 ']' 00:14:42.523 00:23:29 -- common/autotest_common.sh@930 -- # kill -0 83404 00:14:42.523 00:23:29 -- common/autotest_common.sh@931 -- # uname 00:14:42.523 00:23:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:42.523 00:23:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83404 00:14:42.523 killing process with pid 83404 00:14:42.523 Received shutdown signal, test time was about 10.000000 seconds 00:14:42.523 00:14:42.523 Latency(us) 00:14:42.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.523 =================================================================================================================== 00:14:42.523 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:42.523 00:23:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:42.523 00:23:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:42.523 00:23:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83404' 00:14:42.523 00:23:29 -- common/autotest_common.sh@945 -- # kill 83404 00:14:42.523 00:23:29 -- common/autotest_common.sh@950 -- # wait 83404 00:14:42.781 00:23:29 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:43.039 00:23:30 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e094fa87-6863-4dd8-b229-16b55187a371 00:14:43.039 00:23:30 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:43.298 00:23:30 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:43.298 00:23:30 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:43.298 00:23:30 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:43.556 [2024-07-13 00:23:30.647310] 
vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:43.556 00:23:30 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e094fa87-6863-4dd8-b229-16b55187a371 00:14:43.556 00:23:30 -- common/autotest_common.sh@640 -- # local es=0 00:14:43.556 00:23:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e094fa87-6863-4dd8-b229-16b55187a371 00:14:43.556 00:23:30 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:43.556 00:23:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:43.556 00:23:30 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:43.556 00:23:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:43.556 00:23:30 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:43.556 00:23:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:43.556 00:23:30 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:43.556 00:23:30 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:43.556 00:23:30 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e094fa87-6863-4dd8-b229-16b55187a371 00:14:43.813 2024/07/13 00:23:30 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:e094fa87-6863-4dd8-b229-16b55187a371], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:43.814 request: 00:14:43.814 { 00:14:43.814 "method": "bdev_lvol_get_lvstores", 00:14:43.814 "params": { 00:14:43.814 "uuid": "e094fa87-6863-4dd8-b229-16b55187a371" 00:14:43.814 } 00:14:43.814 } 00:14:43.814 Got JSON-RPC error response 00:14:43.814 GoRPCClient: error on JSON-RPC call 00:14:43.814 00:23:30 -- common/autotest_common.sh@643 -- # es=1 00:14:43.814 00:23:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:43.814 00:23:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:43.814 00:23:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:43.814 00:23:30 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:44.071 aio_bdev 00:14:44.071 00:23:31 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 9a24cd7d-6ac9-4008-b550-6de45b83bb0e 00:14:44.071 00:23:31 -- common/autotest_common.sh@887 -- # local bdev_name=9a24cd7d-6ac9-4008-b550-6de45b83bb0e 00:14:44.071 00:23:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:44.071 00:23:31 -- common/autotest_common.sh@889 -- # local i 00:14:44.071 00:23:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:44.071 00:23:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:44.071 00:23:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:44.329 00:23:31 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a24cd7d-6ac9-4008-b550-6de45b83bb0e -t 2000 00:14:44.587 [ 00:14:44.587 { 00:14:44.587 "aliases": [ 00:14:44.587 "lvs/lvol" 00:14:44.587 ], 00:14:44.587 "assigned_rate_limits": { 00:14:44.587 "r_mbytes_per_sec": 0, 00:14:44.587 "rw_ios_per_sec": 0, 
00:14:44.587 "rw_mbytes_per_sec": 0, 00:14:44.587 "w_mbytes_per_sec": 0 00:14:44.587 }, 00:14:44.587 "block_size": 4096, 00:14:44.587 "claimed": false, 00:14:44.587 "driver_specific": { 00:14:44.587 "lvol": { 00:14:44.587 "base_bdev": "aio_bdev", 00:14:44.587 "clone": false, 00:14:44.587 "esnap_clone": false, 00:14:44.587 "lvol_store_uuid": "e094fa87-6863-4dd8-b229-16b55187a371", 00:14:44.587 "snapshot": false, 00:14:44.587 "thin_provision": false 00:14:44.587 } 00:14:44.587 }, 00:14:44.587 "name": "9a24cd7d-6ac9-4008-b550-6de45b83bb0e", 00:14:44.587 "num_blocks": 38912, 00:14:44.587 "product_name": "Logical Volume", 00:14:44.587 "supported_io_types": { 00:14:44.587 "abort": false, 00:14:44.587 "compare": false, 00:14:44.587 "compare_and_write": false, 00:14:44.587 "flush": false, 00:14:44.587 "nvme_admin": false, 00:14:44.587 "nvme_io": false, 00:14:44.587 "read": true, 00:14:44.587 "reset": true, 00:14:44.587 "unmap": true, 00:14:44.587 "write": true, 00:14:44.587 "write_zeroes": true 00:14:44.587 }, 00:14:44.587 "uuid": "9a24cd7d-6ac9-4008-b550-6de45b83bb0e", 00:14:44.587 "zoned": false 00:14:44.587 } 00:14:44.587 ] 00:14:44.587 00:23:31 -- common/autotest_common.sh@895 -- # return 0 00:14:44.587 00:23:31 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e094fa87-6863-4dd8-b229-16b55187a371 00:14:44.587 00:23:31 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:44.846 00:23:31 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:44.846 00:23:31 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e094fa87-6863-4dd8-b229-16b55187a371 00:14:44.846 00:23:31 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:45.104 00:23:32 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:45.104 00:23:32 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9a24cd7d-6ac9-4008-b550-6de45b83bb0e 00:14:45.362 00:23:32 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e094fa87-6863-4dd8-b229-16b55187a371 00:14:45.620 00:23:32 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:45.877 00:23:32 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:46.134 00:14:46.134 real 0m17.632s 00:14:46.134 user 0m16.757s 00:14:46.134 sys 0m2.294s 00:14:46.134 ************************************ 00:14:46.134 END TEST lvs_grow_clean 00:14:46.134 ************************************ 00:14:46.134 00:23:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.134 00:23:33 -- common/autotest_common.sh@10 -- # set +x 00:14:46.134 00:23:33 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:46.134 00:23:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:46.134 00:23:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:46.134 00:23:33 -- common/autotest_common.sh@10 -- # set +x 00:14:46.134 ************************************ 00:14:46.134 START TEST lvs_grow_dirty 00:14:46.134 ************************************ 00:14:46.134 00:23:33 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:14:46.134 00:23:33 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:46.134 00:23:33 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:46.134 00:23:33 -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:46.134 00:23:33 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:46.134 00:23:33 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:46.134 00:23:33 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:46.134 00:23:33 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:46.134 00:23:33 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:46.392 00:23:33 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:46.650 00:23:33 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:46.650 00:23:33 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:46.908 00:23:33 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4abc8e28-e6d6-4f93-a142-96112271797c 00:14:46.908 00:23:33 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4abc8e28-e6d6-4f93-a142-96112271797c 00:14:46.908 00:23:33 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:47.166 00:23:34 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:47.166 00:23:34 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:47.166 00:23:34 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4abc8e28-e6d6-4f93-a142-96112271797c lvol 150 00:14:47.424 00:23:34 -- target/nvmf_lvs_grow.sh@33 -- # lvol=2ac6b7a2-8d27-4511-a3d7-461857902851 00:14:47.424 00:23:34 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:47.424 00:23:34 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:47.682 [2024-07-13 00:23:34.669585] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:47.682 [2024-07-13 00:23:34.669723] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:47.682 true 00:14:47.682 00:23:34 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4abc8e28-e6d6-4f93-a142-96112271797c 00:14:47.682 00:23:34 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:47.940 00:23:34 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:47.940 00:23:34 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:47.940 00:23:35 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2ac6b7a2-8d27-4511-a3d7-461857902851 00:14:48.198 00:23:35 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:48.455 00:23:35 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:48.713 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock... 00:14:48.713 00:23:35 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:48.713 00:23:35 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83839 00:14:48.713 00:23:35 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:48.713 00:23:35 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83839 /var/tmp/bdevperf.sock 00:14:48.713 00:23:35 -- common/autotest_common.sh@819 -- # '[' -z 83839 ']' 00:14:48.713 00:23:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.713 00:23:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:48.713 00:23:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:48.713 00:23:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:48.713 00:23:35 -- common/autotest_common.sh@10 -- # set +x 00:14:48.713 [2024-07-13 00:23:35.868881] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:48.713 [2024-07-13 00:23:35.868985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83839 ] 00:14:48.970 [2024-07-13 00:23:35.999126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.970 [2024-07-13 00:23:36.085671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.902 00:23:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:49.902 00:23:36 -- common/autotest_common.sh@852 -- # return 0 00:14:49.902 00:23:36 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:49.902 Nvme0n1 00:14:49.902 00:23:37 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:50.160 [ 00:14:50.160 { 00:14:50.160 "aliases": [ 00:14:50.160 "2ac6b7a2-8d27-4511-a3d7-461857902851" 00:14:50.160 ], 00:14:50.160 "assigned_rate_limits": { 00:14:50.160 "r_mbytes_per_sec": 0, 00:14:50.160 "rw_ios_per_sec": 0, 00:14:50.160 "rw_mbytes_per_sec": 0, 00:14:50.160 "w_mbytes_per_sec": 0 00:14:50.160 }, 00:14:50.160 "block_size": 4096, 00:14:50.160 "claimed": false, 00:14:50.160 "driver_specific": { 00:14:50.160 "mp_policy": "active_passive", 00:14:50.160 "nvme": [ 00:14:50.160 { 00:14:50.160 "ctrlr_data": { 00:14:50.160 "ana_reporting": false, 00:14:50.160 "cntlid": 1, 00:14:50.160 "firmware_revision": "24.01.1", 00:14:50.160 "model_number": "SPDK bdev Controller", 00:14:50.160 "multi_ctrlr": true, 00:14:50.160 "oacs": { 00:14:50.160 "firmware": 0, 00:14:50.160 "format": 0, 00:14:50.160 "ns_manage": 0, 00:14:50.160 "security": 0 00:14:50.160 }, 00:14:50.160 "serial_number": "SPDK0", 00:14:50.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:50.160 "vendor_id": "0x8086" 00:14:50.160 }, 00:14:50.160 "ns_data": { 00:14:50.160 "can_share": true, 00:14:50.160 "id": 1 00:14:50.160 }, 00:14:50.160 "trid": { 00:14:50.160 "adrfam": "IPv4", 00:14:50.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:50.160 "traddr": "10.0.0.2", 00:14:50.160 "trsvcid": "4420", 00:14:50.160 
"trtype": "TCP" 00:14:50.160 }, 00:14:50.160 "vs": { 00:14:50.160 "nvme_version": "1.3" 00:14:50.160 } 00:14:50.160 } 00:14:50.160 ] 00:14:50.160 }, 00:14:50.160 "name": "Nvme0n1", 00:14:50.160 "num_blocks": 38912, 00:14:50.160 "product_name": "NVMe disk", 00:14:50.160 "supported_io_types": { 00:14:50.160 "abort": true, 00:14:50.160 "compare": true, 00:14:50.160 "compare_and_write": true, 00:14:50.160 "flush": true, 00:14:50.160 "nvme_admin": true, 00:14:50.160 "nvme_io": true, 00:14:50.160 "read": true, 00:14:50.160 "reset": true, 00:14:50.160 "unmap": true, 00:14:50.160 "write": true, 00:14:50.160 "write_zeroes": true 00:14:50.160 }, 00:14:50.160 "uuid": "2ac6b7a2-8d27-4511-a3d7-461857902851", 00:14:50.160 "zoned": false 00:14:50.160 } 00:14:50.160 ] 00:14:50.160 00:23:37 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83881 00:14:50.160 00:23:37 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:50.160 00:23:37 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:50.419 Running I/O for 10 seconds... 00:14:51.354 Latency(us) 00:14:51.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.354 Nvme0n1 : 1.00 9502.00 37.12 0.00 0.00 0.00 0.00 0.00 00:14:51.354 =================================================================================================================== 00:14:51.354 Total : 9502.00 37.12 0.00 0.00 0.00 0.00 0.00 00:14:51.354 00:14:52.289 00:23:39 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4abc8e28-e6d6-4f93-a142-96112271797c 00:14:52.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.289 Nvme0n1 : 2.00 9441.50 36.88 0.00 0.00 0.00 0.00 0.00 00:14:52.289 =================================================================================================================== 00:14:52.289 Total : 9441.50 36.88 0.00 0.00 0.00 0.00 0.00 00:14:52.289 00:14:52.548 true 00:14:52.548 00:23:39 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4abc8e28-e6d6-4f93-a142-96112271797c 00:14:52.548 00:23:39 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:52.807 00:23:39 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:52.807 00:23:39 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:52.807 00:23:39 -- target/nvmf_lvs_grow.sh@65 -- # wait 83881 00:14:53.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.375 Nvme0n1 : 3.00 9288.33 36.28 0.00 0.00 0.00 0.00 0.00 00:14:53.375 =================================================================================================================== 00:14:53.375 Total : 9288.33 36.28 0.00 0.00 0.00 0.00 0.00 00:14:53.375 00:14:54.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.310 Nvme0n1 : 4.00 9311.25 36.37 0.00 0.00 0.00 0.00 0.00 00:14:54.310 =================================================================================================================== 00:14:54.310 Total : 9311.25 36.37 0.00 0.00 0.00 0.00 0.00 00:14:54.310 00:14:55.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.247 Nvme0n1 : 5.00 9325.80 36.43 0.00 0.00 0.00 0.00 0.00 00:14:55.247 
=================================================================================================================== 00:14:55.247 Total : 9325.80 36.43 0.00 0.00 0.00 0.00 0.00 00:14:55.247 00:14:56.631 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.631 Nvme0n1 : 6.00 9334.50 36.46 0.00 0.00 0.00 0.00 0.00 00:14:56.631 =================================================================================================================== 00:14:56.631 Total : 9334.50 36.46 0.00 0.00 0.00 0.00 0.00 00:14:56.631 00:14:57.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.564 Nvme0n1 : 7.00 9324.00 36.42 0.00 0.00 0.00 0.00 0.00 00:14:57.564 =================================================================================================================== 00:14:57.564 Total : 9324.00 36.42 0.00 0.00 0.00 0.00 0.00 00:14:57.564 00:14:58.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.499 Nvme0n1 : 8.00 9107.38 35.58 0.00 0.00 0.00 0.00 0.00 00:14:58.499 =================================================================================================================== 00:14:58.499 Total : 9107.38 35.58 0.00 0.00 0.00 0.00 0.00 00:14:58.499 00:14:59.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.455 Nvme0n1 : 9.00 9075.78 35.45 0.00 0.00 0.00 0.00 0.00 00:14:59.455 =================================================================================================================== 00:14:59.455 Total : 9075.78 35.45 0.00 0.00 0.00 0.00 0.00 00:14:59.455 00:15:00.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.405 Nvme0n1 : 10.00 9035.40 35.29 0.00 0.00 0.00 0.00 0.00 00:15:00.405 =================================================================================================================== 00:15:00.405 Total : 9035.40 35.29 0.00 0.00 0.00 0.00 0.00 00:15:00.405 00:15:00.405 00:15:00.405 Latency(us) 00:15:00.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.405 Nvme0n1 : 10.01 9036.30 35.30 0.00 0.00 14155.93 5332.25 145847.39 00:15:00.405 =================================================================================================================== 00:15:00.405 Total : 9036.30 35.30 0.00 0.00 14155.93 5332.25 145847.39 00:15:00.405 0 00:15:00.405 00:23:47 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83839 00:15:00.405 00:23:47 -- common/autotest_common.sh@926 -- # '[' -z 83839 ']' 00:15:00.405 00:23:47 -- common/autotest_common.sh@930 -- # kill -0 83839 00:15:00.405 00:23:47 -- common/autotest_common.sh@931 -- # uname 00:15:00.405 00:23:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:00.405 00:23:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83839 00:15:00.405 killing process with pid 83839 00:15:00.405 Received shutdown signal, test time was about 10.000000 seconds 00:15:00.405 00:15:00.405 Latency(us) 00:15:00.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.405 =================================================================================================================== 00:15:00.405 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:00.405 00:23:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:00.405 00:23:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
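The per-second samples and the final Latency(us) summary above come from bdevperf running as a separate process that is driven entirely over its own RPC socket. A minimal sketch of that sequence, assuming an nvmf target already listening on 10.0.0.2:4420 and $SPDK pointing at the spdk repo checkout (these are the same commands the trace shows, with the long /home/vagrant/spdk_repo paths shortened):

$SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Because bdevperf is started with -z it sits idle after attaching the Nvme0 controller, which is why the "Running I/O for 10 seconds..." lines only appear once the perform_tests call is issued.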
00:15:00.405 00:23:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83839' 00:15:00.405 00:23:47 -- common/autotest_common.sh@945 -- # kill 83839 00:15:00.405 00:23:47 -- common/autotest_common.sh@950 -- # wait 83839 00:15:00.664 00:23:47 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:00.922 00:23:48 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4abc8e28-e6d6-4f93-a142-96112271797c 00:15:00.922 00:23:48 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:01.180 00:23:48 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:01.180 00:23:48 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:01.180 00:23:48 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83248 00:15:01.180 00:23:48 -- target/nvmf_lvs_grow.sh@74 -- # wait 83248 00:15:01.180 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83248 Killed "${NVMF_APP[@]}" "$@" 00:15:01.180 00:23:48 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:01.180 00:23:48 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:01.180 00:23:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:01.180 00:23:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:01.180 00:23:48 -- common/autotest_common.sh@10 -- # set +x 00:15:01.180 00:23:48 -- nvmf/common.sh@469 -- # nvmfpid=84039 00:15:01.180 00:23:48 -- nvmf/common.sh@470 -- # waitforlisten 84039 00:15:01.180 00:23:48 -- common/autotest_common.sh@819 -- # '[' -z 84039 ']' 00:15:01.180 00:23:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:01.180 00:23:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.180 00:23:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:01.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.180 00:23:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.180 00:23:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:01.180 00:23:48 -- common/autotest_common.sh@10 -- # set +x 00:15:01.438 [2024-07-13 00:23:48.422075] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:01.439 [2024-07-13 00:23:48.422183] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.439 [2024-07-13 00:23:48.560808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.697 [2024-07-13 00:23:48.668875] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:01.697 [2024-07-13 00:23:48.669048] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.697 [2024-07-13 00:23:48.669061] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.697 [2024-07-13 00:23:48.669071] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
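Both the clean and the dirty halves of this test exercise the same grow sequence traced earlier: back an lvstore with a file-based AIO bdev, enlarge the file, rescan, and grow the lvstore in place. A condensed sketch of that flow, assuming rpc.py from the SPDK repo is on PATH and using an illustrative file path (the UUID is whatever bdev_lvol_create_lvstore prints):

truncate -s 200M /tmp/aio_file
rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
rpc.py bdev_lvol_create -u "$lvs" lvol 150
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before the grow
truncate -s 400M /tmp/aio_file
rpc.py bdev_aio_rescan aio_bdev
rpc.py bdev_lvol_grow_lvstore -u "$lvs"
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 afterwards

The dirty variant additionally kills the target with kill -9 while the lvstore is still open, as just happened above; re-creating the AIO bdev after the restart triggers the blobstore recovery notices seen in the trace that follows, after which the same free_clusters/total_data_clusters checks are repeated.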
00:15:01.697 [2024-07-13 00:23:48.669105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.265 00:23:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:02.265 00:23:49 -- common/autotest_common.sh@852 -- # return 0 00:15:02.265 00:23:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:02.265 00:23:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:02.265 00:23:49 -- common/autotest_common.sh@10 -- # set +x 00:15:02.265 00:23:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.265 00:23:49 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:02.523 [2024-07-13 00:23:49.578196] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:02.523 [2024-07-13 00:23:49.578464] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:02.523 [2024-07-13 00:23:49.578693] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:02.523 00:23:49 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:02.523 00:23:49 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 2ac6b7a2-8d27-4511-a3d7-461857902851 00:15:02.523 00:23:49 -- common/autotest_common.sh@887 -- # local bdev_name=2ac6b7a2-8d27-4511-a3d7-461857902851 00:15:02.523 00:23:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:02.523 00:23:49 -- common/autotest_common.sh@889 -- # local i 00:15:02.523 00:23:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:02.523 00:23:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:02.523 00:23:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:02.782 00:23:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2ac6b7a2-8d27-4511-a3d7-461857902851 -t 2000 00:15:03.040 [ 00:15:03.040 { 00:15:03.040 "aliases": [ 00:15:03.040 "lvs/lvol" 00:15:03.040 ], 00:15:03.040 "assigned_rate_limits": { 00:15:03.040 "r_mbytes_per_sec": 0, 00:15:03.040 "rw_ios_per_sec": 0, 00:15:03.040 "rw_mbytes_per_sec": 0, 00:15:03.040 "w_mbytes_per_sec": 0 00:15:03.040 }, 00:15:03.041 "block_size": 4096, 00:15:03.041 "claimed": false, 00:15:03.041 "driver_specific": { 00:15:03.041 "lvol": { 00:15:03.041 "base_bdev": "aio_bdev", 00:15:03.041 "clone": false, 00:15:03.041 "esnap_clone": false, 00:15:03.041 "lvol_store_uuid": "4abc8e28-e6d6-4f93-a142-96112271797c", 00:15:03.041 "snapshot": false, 00:15:03.041 "thin_provision": false 00:15:03.041 } 00:15:03.041 }, 00:15:03.041 "name": "2ac6b7a2-8d27-4511-a3d7-461857902851", 00:15:03.041 "num_blocks": 38912, 00:15:03.041 "product_name": "Logical Volume", 00:15:03.041 "supported_io_types": { 00:15:03.041 "abort": false, 00:15:03.041 "compare": false, 00:15:03.041 "compare_and_write": false, 00:15:03.041 "flush": false, 00:15:03.041 "nvme_admin": false, 00:15:03.041 "nvme_io": false, 00:15:03.041 "read": true, 00:15:03.041 "reset": true, 00:15:03.041 "unmap": true, 00:15:03.041 "write": true, 00:15:03.041 "write_zeroes": true 00:15:03.041 }, 00:15:03.041 "uuid": "2ac6b7a2-8d27-4511-a3d7-461857902851", 00:15:03.041 "zoned": false 00:15:03.041 } 00:15:03.041 ] 00:15:03.041 00:23:50 -- common/autotest_common.sh@895 -- # return 0 00:15:03.041 00:23:50 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4abc8e28-e6d6-4f93-a142-96112271797c 00:15:03.041 00:23:50 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:03.299 00:23:50 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:03.299 00:23:50 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4abc8e28-e6d6-4f93-a142-96112271797c 00:15:03.299 00:23:50 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:03.558 00:23:50 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:03.558 00:23:50 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:03.816 [2024-07-13 00:23:50.871435] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:03.816 00:23:50 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4abc8e28-e6d6-4f93-a142-96112271797c 00:15:03.816 00:23:50 -- common/autotest_common.sh@640 -- # local es=0 00:15:03.816 00:23:50 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4abc8e28-e6d6-4f93-a142-96112271797c 00:15:03.816 00:23:50 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:03.816 00:23:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:03.816 00:23:50 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:03.816 00:23:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:03.816 00:23:50 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:03.816 00:23:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:03.816 00:23:50 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:03.816 00:23:50 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:03.816 00:23:50 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4abc8e28-e6d6-4f93-a142-96112271797c 00:15:04.074 2024/07/13 00:23:51 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:4abc8e28-e6d6-4f93-a142-96112271797c], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:04.074 request: 00:15:04.074 { 00:15:04.074 "method": "bdev_lvol_get_lvstores", 00:15:04.074 "params": { 00:15:04.074 "uuid": "4abc8e28-e6d6-4f93-a142-96112271797c" 00:15:04.074 } 00:15:04.074 } 00:15:04.074 Got JSON-RPC error response 00:15:04.074 GoRPCClient: error on JSON-RPC call 00:15:04.074 00:23:51 -- common/autotest_common.sh@643 -- # es=1 00:15:04.074 00:23:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:04.074 00:23:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:04.074 00:23:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:04.074 00:23:51 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:04.332 aio_bdev 00:15:04.333 00:23:51 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 2ac6b7a2-8d27-4511-a3d7-461857902851 00:15:04.333 00:23:51 -- common/autotest_common.sh@887 -- # local bdev_name=2ac6b7a2-8d27-4511-a3d7-461857902851 00:15:04.333 00:23:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:04.333 
00:23:51 -- common/autotest_common.sh@889 -- # local i 00:15:04.333 00:23:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:04.333 00:23:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:04.333 00:23:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:04.591 00:23:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2ac6b7a2-8d27-4511-a3d7-461857902851 -t 2000 00:15:04.591 [ 00:15:04.591 { 00:15:04.591 "aliases": [ 00:15:04.591 "lvs/lvol" 00:15:04.591 ], 00:15:04.591 "assigned_rate_limits": { 00:15:04.591 "r_mbytes_per_sec": 0, 00:15:04.591 "rw_ios_per_sec": 0, 00:15:04.591 "rw_mbytes_per_sec": 0, 00:15:04.591 "w_mbytes_per_sec": 0 00:15:04.591 }, 00:15:04.591 "block_size": 4096, 00:15:04.591 "claimed": false, 00:15:04.591 "driver_specific": { 00:15:04.591 "lvol": { 00:15:04.591 "base_bdev": "aio_bdev", 00:15:04.591 "clone": false, 00:15:04.591 "esnap_clone": false, 00:15:04.591 "lvol_store_uuid": "4abc8e28-e6d6-4f93-a142-96112271797c", 00:15:04.591 "snapshot": false, 00:15:04.591 "thin_provision": false 00:15:04.591 } 00:15:04.591 }, 00:15:04.591 "name": "2ac6b7a2-8d27-4511-a3d7-461857902851", 00:15:04.591 "num_blocks": 38912, 00:15:04.591 "product_name": "Logical Volume", 00:15:04.591 "supported_io_types": { 00:15:04.591 "abort": false, 00:15:04.591 "compare": false, 00:15:04.591 "compare_and_write": false, 00:15:04.591 "flush": false, 00:15:04.591 "nvme_admin": false, 00:15:04.591 "nvme_io": false, 00:15:04.591 "read": true, 00:15:04.591 "reset": true, 00:15:04.591 "unmap": true, 00:15:04.591 "write": true, 00:15:04.591 "write_zeroes": true 00:15:04.591 }, 00:15:04.591 "uuid": "2ac6b7a2-8d27-4511-a3d7-461857902851", 00:15:04.591 "zoned": false 00:15:04.591 } 00:15:04.591 ] 00:15:04.591 00:23:51 -- common/autotest_common.sh@895 -- # return 0 00:15:04.591 00:23:51 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4abc8e28-e6d6-4f93-a142-96112271797c 00:15:04.591 00:23:51 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:05.158 00:23:52 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:05.158 00:23:52 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4abc8e28-e6d6-4f93-a142-96112271797c 00:15:05.158 00:23:52 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:05.158 00:23:52 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:05.158 00:23:52 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2ac6b7a2-8d27-4511-a3d7-461857902851 00:15:05.415 00:23:52 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4abc8e28-e6d6-4f93-a142-96112271797c 00:15:05.673 00:23:52 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:05.931 00:23:52 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:06.188 00:15:06.189 real 0m19.972s 00:15:06.189 user 0m40.799s 00:15:06.189 sys 0m8.319s 00:15:06.189 00:23:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:06.189 00:23:53 -- common/autotest_common.sh@10 -- # set +x 00:15:06.189 ************************************ 00:15:06.189 END TEST lvs_grow_dirty 00:15:06.189 ************************************ 00:15:06.189 00:23:53 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:06.189 00:23:53 -- common/autotest_common.sh@796 -- # type=--id 00:15:06.189 00:23:53 -- common/autotest_common.sh@797 -- # id=0 00:15:06.189 00:23:53 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:15:06.189 00:23:53 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:06.189 00:23:53 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:15:06.189 00:23:53 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:15:06.189 00:23:53 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:15:06.189 00:23:53 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:06.189 nvmf_trace.0 00:15:06.189 00:23:53 -- common/autotest_common.sh@811 -- # return 0 00:15:06.189 00:23:53 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:06.189 00:23:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:06.189 00:23:53 -- nvmf/common.sh@116 -- # sync 00:15:06.446 00:23:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:06.446 00:23:53 -- nvmf/common.sh@119 -- # set +e 00:15:06.446 00:23:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:06.446 00:23:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:06.446 rmmod nvme_tcp 00:15:06.446 rmmod nvme_fabrics 00:15:06.446 rmmod nvme_keyring 00:15:06.704 00:23:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:06.704 00:23:53 -- nvmf/common.sh@123 -- # set -e 00:15:06.704 00:23:53 -- nvmf/common.sh@124 -- # return 0 00:15:06.704 00:23:53 -- nvmf/common.sh@477 -- # '[' -n 84039 ']' 00:15:06.704 00:23:53 -- nvmf/common.sh@478 -- # killprocess 84039 00:15:06.704 00:23:53 -- common/autotest_common.sh@926 -- # '[' -z 84039 ']' 00:15:06.704 00:23:53 -- common/autotest_common.sh@930 -- # kill -0 84039 00:15:06.704 00:23:53 -- common/autotest_common.sh@931 -- # uname 00:15:06.704 00:23:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:06.704 00:23:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84039 00:15:06.704 00:23:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:06.704 00:23:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:06.704 00:23:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84039' 00:15:06.704 killing process with pid 84039 00:15:06.704 00:23:53 -- common/autotest_common.sh@945 -- # kill 84039 00:15:06.704 00:23:53 -- common/autotest_common.sh@950 -- # wait 84039 00:15:06.963 00:23:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:06.963 00:23:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:06.963 00:23:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:06.963 00:23:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.963 00:23:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:06.963 00:23:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.963 00:23:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.963 00:23:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.963 00:23:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:06.963 00:15:06.963 real 0m40.205s 00:15:06.963 user 1m3.670s 00:15:06.963 sys 0m11.458s 00:15:06.963 00:23:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:06.963 00:23:54 -- common/autotest_common.sh@10 -- # set +x 00:15:06.963 
************************************ 00:15:06.963 END TEST nvmf_lvs_grow 00:15:06.963 ************************************ 00:15:06.963 00:23:54 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:06.963 00:23:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:06.963 00:23:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:06.963 00:23:54 -- common/autotest_common.sh@10 -- # set +x 00:15:06.963 ************************************ 00:15:06.963 START TEST nvmf_bdev_io_wait 00:15:06.963 ************************************ 00:15:06.963 00:23:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:07.222 * Looking for test storage... 00:15:07.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:07.222 00:23:54 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:07.222 00:23:54 -- nvmf/common.sh@7 -- # uname -s 00:15:07.222 00:23:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.222 00:23:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.222 00:23:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.222 00:23:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.222 00:23:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.222 00:23:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.222 00:23:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.222 00:23:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.222 00:23:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.222 00:23:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.222 00:23:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:15:07.222 00:23:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:15:07.222 00:23:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.222 00:23:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.222 00:23:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:07.222 00:23:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:07.222 00:23:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.222 00:23:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.222 00:23:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.222 00:23:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.222 00:23:54 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.222 00:23:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.222 00:23:54 -- paths/export.sh@5 -- # export PATH 00:15:07.222 00:23:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.222 00:23:54 -- nvmf/common.sh@46 -- # : 0 00:15:07.222 00:23:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:07.222 00:23:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:07.222 00:23:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:07.222 00:23:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.222 00:23:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.223 00:23:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:07.223 00:23:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:07.223 00:23:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:07.223 00:23:54 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:07.223 00:23:54 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:07.223 00:23:54 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:07.223 00:23:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:07.223 00:23:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.223 00:23:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:07.223 00:23:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:07.223 00:23:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:07.223 00:23:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.223 00:23:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.223 00:23:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.223 00:23:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:07.223 00:23:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:07.223 00:23:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:07.223 00:23:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:07.223 00:23:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
00:15:07.223 00:23:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:07.223 00:23:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:07.223 00:23:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:07.223 00:23:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:07.223 00:23:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:07.223 00:23:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:07.223 00:23:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:07.223 00:23:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:07.223 00:23:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:07.223 00:23:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:07.223 00:23:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:07.223 00:23:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:07.223 00:23:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:07.223 00:23:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:07.223 00:23:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:07.223 Cannot find device "nvmf_tgt_br" 00:15:07.223 00:23:54 -- nvmf/common.sh@154 -- # true 00:15:07.223 00:23:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:07.223 Cannot find device "nvmf_tgt_br2" 00:15:07.223 00:23:54 -- nvmf/common.sh@155 -- # true 00:15:07.223 00:23:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:07.223 00:23:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:07.223 Cannot find device "nvmf_tgt_br" 00:15:07.223 00:23:54 -- nvmf/common.sh@157 -- # true 00:15:07.223 00:23:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:07.223 Cannot find device "nvmf_tgt_br2" 00:15:07.223 00:23:54 -- nvmf/common.sh@158 -- # true 00:15:07.223 00:23:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:07.223 00:23:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:07.223 00:23:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:07.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:07.223 00:23:54 -- nvmf/common.sh@161 -- # true 00:15:07.223 00:23:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:07.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:07.223 00:23:54 -- nvmf/common.sh@162 -- # true 00:15:07.223 00:23:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:07.223 00:23:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:07.223 00:23:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:07.223 00:23:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:07.223 00:23:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:07.223 00:23:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:07.223 00:23:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:07.223 00:23:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:07.223 00:23:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:07.481 
00:23:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:07.481 00:23:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:07.481 00:23:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:07.481 00:23:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:07.481 00:23:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:07.481 00:23:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:07.481 00:23:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:07.481 00:23:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:07.481 00:23:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:07.481 00:23:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:07.481 00:23:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:07.481 00:23:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:07.481 00:23:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:07.481 00:23:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:07.481 00:23:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:07.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:07.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:15:07.481 00:15:07.481 --- 10.0.0.2 ping statistics --- 00:15:07.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.481 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:15:07.481 00:23:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:07.481 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:07.481 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:07.481 00:15:07.481 --- 10.0.0.3 ping statistics --- 00:15:07.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.481 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:07.481 00:23:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:07.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:07.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:07.481 00:15:07.481 --- 10.0.0.1 ping statistics --- 00:15:07.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.481 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:07.481 00:23:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:07.481 00:23:54 -- nvmf/common.sh@421 -- # return 0 00:15:07.481 00:23:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:07.481 00:23:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:07.481 00:23:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:07.481 00:23:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:07.481 00:23:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:07.481 00:23:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:07.481 00:23:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:07.481 00:23:54 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:07.481 00:23:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:07.481 00:23:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:07.481 00:23:54 -- common/autotest_common.sh@10 -- # set +x 00:15:07.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
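
The nvmf_veth_init sequence traced above builds the test network in a dedicated namespace: veth pairs for the initiator and two target interfaces, addresses 10.0.0.1-3, a bridge joining the host-side peers, and an iptables rule admitting NVMe/TCP traffic on port 4420, verified by the pings that follow. A minimal standalone sketch of that topology, assembled from the traced commands (it assumes root privileges and the iproute2/iptables tools, and is not the exact common.sh implementation):

#!/usr/bin/env bash
# Sketch of the topology nvmf_veth_init builds (interface and namespace names as in the trace).
set -euo pipefail
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target port

ip link set nvmf_tgt_if  netns "$NS"                          # target ends live inside the namespace
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

ip link add nvmf_br type bridge                               # bridge the host-side peers together
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # same sanity checks as in the trace
ip netns exec "$NS" ping -c 1 10.0.0.1
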
00:15:07.481 00:23:54 -- nvmf/common.sh@469 -- # nvmfpid=84454 00:15:07.481 00:23:54 -- nvmf/common.sh@470 -- # waitforlisten 84454 00:15:07.481 00:23:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:07.481 00:23:54 -- common/autotest_common.sh@819 -- # '[' -z 84454 ']' 00:15:07.481 00:23:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.481 00:23:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:07.481 00:23:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.481 00:23:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:07.481 00:23:54 -- common/autotest_common.sh@10 -- # set +x 00:15:07.482 [2024-07-13 00:23:54.676564] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:07.482 [2024-07-13 00:23:54.677000] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.740 [2024-07-13 00:23:54.819209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:07.740 [2024-07-13 00:23:54.914645] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:07.740 [2024-07-13 00:23:54.915046] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.740 [2024-07-13 00:23:54.915435] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.740 [2024-07-13 00:23:54.915757] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
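
The launch-and-wait step traced just above (nvmf/common.sh@468-470) starts the target inside that namespace and blocks until its RPC socket answers, which is what the 'Waiting for process to start up...' message and the reactor notices around this point correspond to. A rough equivalent is sketched below; the nvmf_tgt path and flags are copied from the trace, while the polling loop only approximates the waitforlisten helper rather than reproducing it.

SPDK_REPO=/home/vagrant/spdk_repo/spdk                # repo path as printed in the trace
ip netns exec nvmf_tgt_ns_spdk \
    "$SPDK_REPO/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!

# Poll the UNIX-domain RPC socket until the app responds (approximation of waitforlisten).
until "$SPDK_REPO/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done
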
00:15:07.740 [2024-07-13 00:23:54.916075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.740 [2024-07-13 00:23:54.916226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:07.740 [2024-07-13 00:23:54.916373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:07.740 [2024-07-13 00:23:54.916377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.690 00:23:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:08.690 00:23:55 -- common/autotest_common.sh@852 -- # return 0 00:15:08.690 00:23:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:08.690 00:23:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:08.690 00:23:55 -- common/autotest_common.sh@10 -- # set +x 00:15:08.690 00:23:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.690 00:23:55 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:08.690 00:23:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.690 00:23:55 -- common/autotest_common.sh@10 -- # set +x 00:15:08.690 00:23:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:08.691 00:23:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.691 00:23:55 -- common/autotest_common.sh@10 -- # set +x 00:15:08.691 00:23:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:08.691 00:23:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.691 00:23:55 -- common/autotest_common.sh@10 -- # set +x 00:15:08.691 [2024-07-13 00:23:55.749213] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.691 00:23:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:08.691 00:23:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.691 00:23:55 -- common/autotest_common.sh@10 -- # set +x 00:15:08.691 Malloc0 00:15:08.691 00:23:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:08.691 00:23:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.691 00:23:55 -- common/autotest_common.sh@10 -- # set +x 00:15:08.691 00:23:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:08.691 00:23:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.691 00:23:55 -- common/autotest_common.sh@10 -- # set +x 00:15:08.691 00:23:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.691 00:23:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.691 00:23:55 -- common/autotest_common.sh@10 -- # set +x 00:15:08.691 [2024-07-13 00:23:55.812136] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.691 00:23:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=84508 00:15:08.691 00:23:55 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@30 -- # READ_PID=84510 00:15:08.691 00:23:55 -- nvmf/common.sh@520 -- # config=() 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:08.691 00:23:55 -- nvmf/common.sh@520 -- # local subsystem config 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:08.691 00:23:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=84512 00:15:08.691 00:23:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:08.691 { 00:15:08.691 "params": { 00:15:08.691 "name": "Nvme$subsystem", 00:15:08.691 "trtype": "$TEST_TRANSPORT", 00:15:08.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:08.691 "adrfam": "ipv4", 00:15:08.691 "trsvcid": "$NVMF_PORT", 00:15:08.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:08.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:08.691 "hdgst": ${hdgst:-false}, 00:15:08.691 "ddgst": ${ddgst:-false} 00:15:08.691 }, 00:15:08.691 "method": "bdev_nvme_attach_controller" 00:15:08.691 } 00:15:08.691 EOF 00:15:08.691 )") 00:15:08.691 00:23:55 -- nvmf/common.sh@520 -- # config=() 00:15:08.691 00:23:55 -- nvmf/common.sh@520 -- # local subsystem config 00:15:08.691 00:23:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:08.691 00:23:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:08.691 { 00:15:08.691 "params": { 00:15:08.691 "name": "Nvme$subsystem", 00:15:08.691 "trtype": "$TEST_TRANSPORT", 00:15:08.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:08.691 "adrfam": "ipv4", 00:15:08.691 "trsvcid": "$NVMF_PORT", 00:15:08.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:08.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:08.691 "hdgst": ${hdgst:-false}, 00:15:08.691 "ddgst": ${ddgst:-false} 00:15:08.691 }, 00:15:08.691 "method": "bdev_nvme_attach_controller" 00:15:08.691 } 00:15:08.691 EOF 00:15:08.691 )") 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=84514 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@35 -- # sync 00:15:08.691 00:23:55 -- nvmf/common.sh@542 -- # cat 00:15:08.691 00:23:55 -- nvmf/common.sh@542 -- # cat 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:08.691 00:23:55 -- nvmf/common.sh@544 -- # jq . 
00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:08.691 00:23:55 -- nvmf/common.sh@520 -- # config=() 00:15:08.691 00:23:55 -- nvmf/common.sh@520 -- # local subsystem config 00:15:08.691 00:23:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:08.691 00:23:55 -- nvmf/common.sh@545 -- # IFS=, 00:15:08.691 00:23:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:08.691 { 00:15:08.691 "params": { 00:15:08.691 "name": "Nvme$subsystem", 00:15:08.691 "trtype": "$TEST_TRANSPORT", 00:15:08.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:08.691 "adrfam": "ipv4", 00:15:08.691 "trsvcid": "$NVMF_PORT", 00:15:08.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:08.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:08.691 "hdgst": ${hdgst:-false}, 00:15:08.691 "ddgst": ${ddgst:-false} 00:15:08.691 }, 00:15:08.691 "method": "bdev_nvme_attach_controller" 00:15:08.691 } 00:15:08.691 EOF 00:15:08.691 )") 00:15:08.691 00:23:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:08.691 "params": { 00:15:08.691 "name": "Nvme1", 00:15:08.691 "trtype": "tcp", 00:15:08.691 "traddr": "10.0.0.2", 00:15:08.691 "adrfam": "ipv4", 00:15:08.691 "trsvcid": "4420", 00:15:08.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:08.691 "hdgst": false, 00:15:08.691 "ddgst": false 00:15:08.691 }, 00:15:08.691 "method": "bdev_nvme_attach_controller" 00:15:08.691 }' 00:15:08.691 00:23:55 -- nvmf/common.sh@544 -- # jq . 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:08.691 00:23:55 -- nvmf/common.sh@520 -- # config=() 00:15:08.691 00:23:55 -- nvmf/common.sh@520 -- # local subsystem config 00:15:08.691 00:23:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:08.691 00:23:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:08.691 { 00:15:08.691 "params": { 00:15:08.691 "name": "Nvme$subsystem", 00:15:08.691 "trtype": "$TEST_TRANSPORT", 00:15:08.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:08.691 "adrfam": "ipv4", 00:15:08.691 "trsvcid": "$NVMF_PORT", 00:15:08.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:08.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:08.691 "hdgst": ${hdgst:-false}, 00:15:08.691 "ddgst": ${ddgst:-false} 00:15:08.691 }, 00:15:08.691 "method": "bdev_nvme_attach_controller" 00:15:08.691 } 00:15:08.691 EOF 00:15:08.691 )") 00:15:08.691 00:23:55 -- nvmf/common.sh@545 -- # IFS=, 00:15:08.691 00:23:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:08.691 "params": { 00:15:08.691 "name": "Nvme1", 00:15:08.691 "trtype": "tcp", 00:15:08.691 "traddr": "10.0.0.2", 00:15:08.691 "adrfam": "ipv4", 00:15:08.691 "trsvcid": "4420", 00:15:08.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:08.691 "hdgst": false, 00:15:08.691 "ddgst": false 00:15:08.691 }, 00:15:08.691 "method": "bdev_nvme_attach_controller" 00:15:08.691 }' 00:15:08.691 00:23:55 -- nvmf/common.sh@542 -- # cat 00:15:08.691 00:23:55 -- nvmf/common.sh@542 -- # cat 00:15:08.691 00:23:55 -- nvmf/common.sh@544 -- # jq . 
00:15:08.691 00:23:55 -- nvmf/common.sh@545 -- # IFS=, 00:15:08.691 00:23:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:08.691 "params": { 00:15:08.691 "name": "Nvme1", 00:15:08.691 "trtype": "tcp", 00:15:08.691 "traddr": "10.0.0.2", 00:15:08.691 "adrfam": "ipv4", 00:15:08.691 "trsvcid": "4420", 00:15:08.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:08.691 "hdgst": false, 00:15:08.691 "ddgst": false 00:15:08.691 }, 00:15:08.691 "method": "bdev_nvme_attach_controller" 00:15:08.691 }' 00:15:08.691 00:23:55 -- nvmf/common.sh@544 -- # jq . 00:15:08.691 00:23:55 -- nvmf/common.sh@545 -- # IFS=, 00:15:08.691 00:23:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:08.691 "params": { 00:15:08.691 "name": "Nvme1", 00:15:08.691 "trtype": "tcp", 00:15:08.691 "traddr": "10.0.0.2", 00:15:08.691 "adrfam": "ipv4", 00:15:08.691 "trsvcid": "4420", 00:15:08.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:08.691 "hdgst": false, 00:15:08.691 "ddgst": false 00:15:08.691 }, 00:15:08.691 "method": "bdev_nvme_attach_controller" 00:15:08.691 }' 00:15:08.691 [2024-07-13 00:23:55.874129] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:08.691 [2024-07-13 00:23:55.874430] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:08.691 [2024-07-13 00:23:55.880824] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:08.691 [2024-07-13 00:23:55.880911] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:08.691 00:23:55 -- target/bdev_io_wait.sh@37 -- # wait 84508 00:15:08.691 [2024-07-13 00:23:55.900259] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:08.691 [2024-07-13 00:23:55.900336] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:08.691 [2024-07-13 00:23:55.907693] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:08.692 [2024-07-13 00:23:55.907766] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:08.950 [2024-07-13 00:23:56.093006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.950 [2024-07-13 00:23:56.171208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:08.950 [2024-07-13 00:23:56.172094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.208 [2024-07-13 00:23:56.240940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.208 [2024-07-13 00:23:56.269258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:09.208 [2024-07-13 00:23:56.316245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.208 [2024-07-13 00:23:56.318844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:09.208 Running I/O for 1 seconds... 00:15:09.208 [2024-07-13 00:23:56.391791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:09.467 Running I/O for 1 seconds... 00:15:09.467 Running I/O for 1 seconds... 00:15:09.467 Running I/O for 1 seconds... 00:15:10.403 00:15:10.403 Latency(us) 00:15:10.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.403 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:10.403 Nvme1n1 : 1.01 10552.83 41.22 0.00 0.00 12077.14 7804.74 19303.33 00:15:10.403 =================================================================================================================== 00:15:10.403 Total : 10552.83 41.22 0.00 0.00 12077.14 7804.74 19303.33 00:15:10.403 00:15:10.403 Latency(us) 00:15:10.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.403 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:10.403 Nvme1n1 : 1.00 213489.45 833.94 0.00 0.00 597.52 227.14 997.93 00:15:10.403 =================================================================================================================== 00:15:10.403 Total : 213489.45 833.94 0.00 0.00 597.52 227.14 997.93 00:15:10.403 00:15:10.403 Latency(us) 00:15:10.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.403 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:10.403 Nvme1n1 : 1.01 8307.96 32.45 0.00 0.00 15334.89 7328.12 25261.15 00:15:10.403 =================================================================================================================== 00:15:10.403 Total : 8307.96 32.45 0.00 0.00 15334.89 7328.12 25261.15 00:15:10.403 00:15:10.403 Latency(us) 00:15:10.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.403 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:10.403 Nvme1n1 : 1.01 8536.88 33.35 0.00 0.00 14934.24 7149.38 26333.56 00:15:10.403 =================================================================================================================== 00:15:10.403 Total : 8536.88 33.35 0.00 0.00 14934.24 7149.38 26333.56 00:15:10.661 00:23:57 -- target/bdev_io_wait.sh@38 -- # wait 84510 00:15:10.661 00:23:57 -- target/bdev_io_wait.sh@39 -- # wait 84512 00:15:10.661 00:23:57 -- target/bdev_io_wait.sh@40 -- # wait 84514 00:15:10.921 00:23:57 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
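
The per-workload results above come from four bdevperf instances run in parallel against the same subsystem, one per workload (write, read, flush, unmap). Each is fed its target description by gen_nvmf_target_json, the helper from the sourced test/nvmf/common.sh whose rendered JSON is printed in the trace, through process substitution, which is the /dev/fd/63 seen in the commands. Condensed into a sketch, not a verbatim excerpt:

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf   # binary path from the trace

# one short (-t 1) run per workload, each with its own core mask and shared-memory id
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!

# every instance attaches Nvme1 to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 (see the rendered JSON above)
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
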
00:15:10.921 00:23:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.921 00:23:57 -- common/autotest_common.sh@10 -- # set +x 00:15:10.921 00:23:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.921 00:23:57 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:10.921 00:23:57 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:10.921 00:23:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:10.921 00:23:57 -- nvmf/common.sh@116 -- # sync 00:15:10.921 00:23:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:10.921 00:23:57 -- nvmf/common.sh@119 -- # set +e 00:15:10.921 00:23:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:10.921 00:23:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:10.921 rmmod nvme_tcp 00:15:10.921 rmmod nvme_fabrics 00:15:10.921 rmmod nvme_keyring 00:15:10.921 00:23:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:10.921 00:23:58 -- nvmf/common.sh@123 -- # set -e 00:15:10.921 00:23:58 -- nvmf/common.sh@124 -- # return 0 00:15:10.921 00:23:58 -- nvmf/common.sh@477 -- # '[' -n 84454 ']' 00:15:10.921 00:23:58 -- nvmf/common.sh@478 -- # killprocess 84454 00:15:10.921 00:23:58 -- common/autotest_common.sh@926 -- # '[' -z 84454 ']' 00:15:10.921 00:23:58 -- common/autotest_common.sh@930 -- # kill -0 84454 00:15:10.921 00:23:58 -- common/autotest_common.sh@931 -- # uname 00:15:10.921 00:23:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:10.921 00:23:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84454 00:15:10.921 00:23:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:10.921 killing process with pid 84454 00:15:10.921 00:23:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:10.921 00:23:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84454' 00:15:10.921 00:23:58 -- common/autotest_common.sh@945 -- # kill 84454 00:15:10.921 00:23:58 -- common/autotest_common.sh@950 -- # wait 84454 00:15:11.180 00:23:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:11.180 00:23:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:11.180 00:23:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:11.180 00:23:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:11.180 00:23:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:11.180 00:23:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.180 00:23:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.180 00:23:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.180 00:23:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:11.180 00:15:11.180 real 0m4.232s 00:15:11.180 user 0m18.301s 00:15:11.180 sys 0m2.235s 00:15:11.180 00:23:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:11.180 ************************************ 00:15:11.180 END TEST nvmf_bdev_io_wait 00:15:11.180 ************************************ 00:15:11.180 00:23:58 -- common/autotest_common.sh@10 -- # set +x 00:15:11.180 00:23:58 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:11.180 00:23:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:11.180 00:23:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:11.180 00:23:58 -- common/autotest_common.sh@10 -- # set +x 00:15:11.180 ************************************ 00:15:11.180 START TEST nvmf_queue_depth 00:15:11.180 
************************************ 00:15:11.180 00:23:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:11.439 * Looking for test storage... 00:15:11.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:11.439 00:23:58 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:11.439 00:23:58 -- nvmf/common.sh@7 -- # uname -s 00:15:11.439 00:23:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.439 00:23:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.439 00:23:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.439 00:23:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.439 00:23:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.439 00:23:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.439 00:23:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.439 00:23:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.439 00:23:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.439 00:23:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.439 00:23:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:15:11.439 00:23:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:15:11.439 00:23:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.439 00:23:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.439 00:23:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:11.439 00:23:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:11.439 00:23:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.439 00:23:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.439 00:23:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.439 00:23:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.439 00:23:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.439 00:23:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.440 00:23:58 -- paths/export.sh@5 -- # export PATH 00:15:11.440 00:23:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.440 00:23:58 -- nvmf/common.sh@46 -- # : 0 00:15:11.440 00:23:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:11.440 00:23:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:11.440 00:23:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:11.440 00:23:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.440 00:23:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.440 00:23:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:11.440 00:23:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:11.440 00:23:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:11.440 00:23:58 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:11.440 00:23:58 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:11.440 00:23:58 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:11.440 00:23:58 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:11.440 00:23:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:11.440 00:23:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.440 00:23:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:11.440 00:23:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:11.440 00:23:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:11.440 00:23:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.440 00:23:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.440 00:23:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.440 00:23:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:11.440 00:23:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:11.440 00:23:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:11.440 00:23:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:11.440 00:23:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:11.440 00:23:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:11.440 00:23:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.440 00:23:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.440 00:23:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:11.440 00:23:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:11.440 00:23:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:11.440 00:23:58 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:11.440 00:23:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:11.440 00:23:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.440 00:23:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:11.440 00:23:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:11.440 00:23:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:11.440 00:23:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:11.440 00:23:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:11.440 00:23:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:11.440 Cannot find device "nvmf_tgt_br" 00:15:11.440 00:23:58 -- nvmf/common.sh@154 -- # true 00:15:11.440 00:23:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:11.440 Cannot find device "nvmf_tgt_br2" 00:15:11.440 00:23:58 -- nvmf/common.sh@155 -- # true 00:15:11.440 00:23:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:11.440 00:23:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:11.440 Cannot find device "nvmf_tgt_br" 00:15:11.440 00:23:58 -- nvmf/common.sh@157 -- # true 00:15:11.440 00:23:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:11.440 Cannot find device "nvmf_tgt_br2" 00:15:11.440 00:23:58 -- nvmf/common.sh@158 -- # true 00:15:11.440 00:23:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:11.440 00:23:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:11.440 00:23:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:11.440 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.440 00:23:58 -- nvmf/common.sh@161 -- # true 00:15:11.440 00:23:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:11.440 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.440 00:23:58 -- nvmf/common.sh@162 -- # true 00:15:11.440 00:23:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:11.440 00:23:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:11.440 00:23:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:11.440 00:23:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:11.440 00:23:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:11.698 00:23:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:11.698 00:23:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:11.698 00:23:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:11.698 00:23:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:11.698 00:23:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:11.698 00:23:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:11.698 00:23:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:11.698 00:23:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:11.698 00:23:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:11.698 00:23:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:15:11.698 00:23:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:11.698 00:23:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:11.698 00:23:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:11.698 00:23:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:11.698 00:23:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:11.698 00:23:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:11.698 00:23:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:11.698 00:23:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:11.698 00:23:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:11.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:15:11.698 00:15:11.698 --- 10.0.0.2 ping statistics --- 00:15:11.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.698 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:11.698 00:23:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:11.698 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:11.698 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:15:11.698 00:15:11.698 --- 10.0.0.3 ping statistics --- 00:15:11.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.698 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:11.698 00:23:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:11.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:11.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:11.698 00:15:11.698 --- 10.0.0.1 ping statistics --- 00:15:11.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.698 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:11.698 00:23:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.698 00:23:58 -- nvmf/common.sh@421 -- # return 0 00:15:11.698 00:23:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:11.698 00:23:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.698 00:23:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:11.698 00:23:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:11.698 00:23:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.698 00:23:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:11.698 00:23:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:11.698 00:23:58 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:11.698 00:23:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:11.698 00:23:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:11.698 00:23:58 -- common/autotest_common.sh@10 -- # set +x 00:15:11.698 00:23:58 -- nvmf/common.sh@469 -- # nvmfpid=84743 00:15:11.698 00:23:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:11.698 00:23:58 -- nvmf/common.sh@470 -- # waitforlisten 84743 00:15:11.698 00:23:58 -- common/autotest_common.sh@819 -- # '[' -z 84743 ']' 00:15:11.698 00:23:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.698 00:23:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:11.698 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:15:11.698 00:23:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.698 00:23:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:11.698 00:23:58 -- common/autotest_common.sh@10 -- # set +x 00:15:11.698 [2024-07-13 00:23:58.872378] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:11.698 [2024-07-13 00:23:58.872509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.955 [2024-07-13 00:23:59.013080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.955 [2024-07-13 00:23:59.107399] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:11.955 [2024-07-13 00:23:59.107552] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.955 [2024-07-13 00:23:59.107564] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.955 [2024-07-13 00:23:59.107573] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.955 [2024-07-13 00:23:59.107598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.521 00:23:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:12.521 00:23:59 -- common/autotest_common.sh@852 -- # return 0 00:15:12.521 00:23:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:12.521 00:23:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:12.521 00:23:59 -- common/autotest_common.sh@10 -- # set +x 00:15:12.779 00:23:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.779 00:23:59 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:12.779 00:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.779 00:23:59 -- common/autotest_common.sh@10 -- # set +x 00:15:12.779 [2024-07-13 00:23:59.799290] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.780 00:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.780 00:23:59 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:12.780 00:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.780 00:23:59 -- common/autotest_common.sh@10 -- # set +x 00:15:12.780 Malloc0 00:15:12.780 00:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.780 00:23:59 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:12.780 00:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.780 00:23:59 -- common/autotest_common.sh@10 -- # set +x 00:15:12.780 00:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.780 00:23:59 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:12.780 00:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.780 00:23:59 -- common/autotest_common.sh@10 -- # set +x 00:15:12.780 00:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.780 00:23:59 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
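
The rpc_cmd calls traced just above (queue_depth.sh@23-27) provision the target: create the TCP transport, back it with a 64 MB / 512-byte-block malloc bdev, and expose that bdev as a namespace of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. The same calls, condensed into direct rpc.py invocations against the default /var/tmp/spdk.sock socket (the test wraps them in its rpc_cmd helper):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # rpc_cmd in the test wraps this script

"$RPC" nvmf_create_transport -t tcp -o -u 8192                                     # TCP transport; -u sets in-capsule data size
"$RPC" bdev_malloc_create 64 512 -b Malloc0                                        # 64 MB RAM-backed bdev, 512-byte blocks
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
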
00:15:12.780 00:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.780 00:23:59 -- common/autotest_common.sh@10 -- # set +x 00:15:12.780 [2024-07-13 00:23:59.860371] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.780 00:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.780 00:23:59 -- target/queue_depth.sh@30 -- # bdevperf_pid=84793 00:15:12.780 00:23:59 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:12.780 00:23:59 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:12.780 00:23:59 -- target/queue_depth.sh@33 -- # waitforlisten 84793 /var/tmp/bdevperf.sock 00:15:12.780 00:23:59 -- common/autotest_common.sh@819 -- # '[' -z 84793 ']' 00:15:12.780 00:23:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:12.780 00:23:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:12.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:12.780 00:23:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:12.780 00:23:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:12.780 00:23:59 -- common/autotest_common.sh@10 -- # set +x 00:15:12.780 [2024-07-13 00:23:59.921495] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:12.780 [2024-07-13 00:23:59.921625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84793 ] 00:15:13.038 [2024-07-13 00:24:00.063064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.038 [2024-07-13 00:24:00.172564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.974 00:24:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:13.974 00:24:00 -- common/autotest_common.sh@852 -- # return 0 00:15:13.974 00:24:00 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:13.974 00:24:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.974 00:24:00 -- common/autotest_common.sh@10 -- # set +x 00:15:13.974 NVMe0n1 00:15:13.974 00:24:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.974 00:24:00 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:13.974 Running I/O for 10 seconds... 
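
On the initiator side the test drives bdevperf over RPC rather than a static config: the binary is started idle with -z on its own socket, a controller for the subsystem is attached through that socket, and the run is triggered with bdevperf.py perform_tests, which is what produces the 'Running I/O for 10 seconds...' line above. A condensed sketch of the traced commands:

SPDK_REPO=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock

# start bdevperf idle (-z): queue depth 1024, 4 KiB verify workload, 10-second run
"$SPDK_REPO/build/examples/bdevperf" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!

# attach the target subsystem over the bdevperf RPC socket; this creates bdev NVMe0n1
"$SPDK_REPO/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# kick off the actual I/O phase; results are printed once the 10 seconds expire
"$SPDK_REPO/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
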
00:15:24.035 00:15:24.035 Latency(us) 00:15:24.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.035 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:24.035 Verification LBA range: start 0x0 length 0x4000 00:15:24.035 NVMe0n1 : 10.05 15561.58 60.79 0.00 0.00 65588.33 12571.00 60054.81 00:15:24.035 =================================================================================================================== 00:15:24.035 Total : 15561.58 60.79 0.00 0.00 65588.33 12571.00 60054.81 00:15:24.035 0 00:15:24.035 00:24:11 -- target/queue_depth.sh@39 -- # killprocess 84793 00:15:24.035 00:24:11 -- common/autotest_common.sh@926 -- # '[' -z 84793 ']' 00:15:24.035 00:24:11 -- common/autotest_common.sh@930 -- # kill -0 84793 00:15:24.035 00:24:11 -- common/autotest_common.sh@931 -- # uname 00:15:24.035 00:24:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:24.035 00:24:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84793 00:15:24.035 00:24:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:24.035 00:24:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:24.035 killing process with pid 84793 00:15:24.035 00:24:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84793' 00:15:24.035 Received shutdown signal, test time was about 10.000000 seconds 00:15:24.035 00:15:24.035 Latency(us) 00:15:24.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.035 =================================================================================================================== 00:15:24.035 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:24.035 00:24:11 -- common/autotest_common.sh@945 -- # kill 84793 00:15:24.035 00:24:11 -- common/autotest_common.sh@950 -- # wait 84793 00:15:24.294 00:24:11 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:24.294 00:24:11 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:24.294 00:24:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:24.294 00:24:11 -- nvmf/common.sh@116 -- # sync 00:15:24.294 00:24:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:24.294 00:24:11 -- nvmf/common.sh@119 -- # set +e 00:15:24.294 00:24:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:24.294 00:24:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:24.294 rmmod nvme_tcp 00:15:24.294 rmmod nvme_fabrics 00:15:24.294 rmmod nvme_keyring 00:15:24.294 00:24:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:24.294 00:24:11 -- nvmf/common.sh@123 -- # set -e 00:15:24.294 00:24:11 -- nvmf/common.sh@124 -- # return 0 00:15:24.294 00:24:11 -- nvmf/common.sh@477 -- # '[' -n 84743 ']' 00:15:24.294 00:24:11 -- nvmf/common.sh@478 -- # killprocess 84743 00:15:24.294 00:24:11 -- common/autotest_common.sh@926 -- # '[' -z 84743 ']' 00:15:24.294 00:24:11 -- common/autotest_common.sh@930 -- # kill -0 84743 00:15:24.294 00:24:11 -- common/autotest_common.sh@931 -- # uname 00:15:24.294 00:24:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:24.294 00:24:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84743 00:15:24.553 00:24:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:24.553 00:24:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:24.553 00:24:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84743' 00:15:24.553 killing process with pid 84743 00:15:24.553 00:24:11 -- 
common/autotest_common.sh@945 -- # kill 84743 00:15:24.553 00:24:11 -- common/autotest_common.sh@950 -- # wait 84743 00:15:24.812 00:24:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:24.812 00:24:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:24.812 00:24:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:24.812 00:24:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:24.812 00:24:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:24.812 00:24:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.812 00:24:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:24.812 00:24:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.812 00:24:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:24.812 ************************************ 00:15:24.812 END TEST nvmf_queue_depth 00:15:24.812 ************************************ 00:15:24.812 00:15:24.812 real 0m13.478s 00:15:24.812 user 0m22.486s 00:15:24.812 sys 0m2.506s 00:15:24.812 00:24:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:24.812 00:24:11 -- common/autotest_common.sh@10 -- # set +x 00:15:24.812 00:24:11 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:24.812 00:24:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:24.812 00:24:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:24.812 00:24:11 -- common/autotest_common.sh@10 -- # set +x 00:15:24.812 ************************************ 00:15:24.812 START TEST nvmf_multipath 00:15:24.812 ************************************ 00:15:24.812 00:24:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:24.812 * Looking for test storage... 
00:15:24.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:24.812 00:24:12 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:24.812 00:24:12 -- nvmf/common.sh@7 -- # uname -s 00:15:24.812 00:24:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.812 00:24:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.812 00:24:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.812 00:24:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.812 00:24:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.812 00:24:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.812 00:24:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.812 00:24:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.813 00:24:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.813 00:24:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.813 00:24:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:15:24.813 00:24:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:15:24.813 00:24:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.813 00:24:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.813 00:24:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:24.813 00:24:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:24.813 00:24:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.813 00:24:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.813 00:24:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.813 00:24:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.813 00:24:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.813 00:24:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.813 00:24:12 -- 
paths/export.sh@5 -- # export PATH 00:15:24.813 00:24:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.813 00:24:12 -- nvmf/common.sh@46 -- # : 0 00:15:24.813 00:24:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:24.813 00:24:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:24.813 00:24:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:24.813 00:24:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.813 00:24:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.813 00:24:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:24.813 00:24:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:24.813 00:24:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:24.813 00:24:12 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:24.813 00:24:12 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:24.813 00:24:12 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:24.813 00:24:12 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:24.813 00:24:12 -- target/multipath.sh@43 -- # nvmftestinit 00:15:24.813 00:24:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:24.813 00:24:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:24.813 00:24:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:24.813 00:24:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:24.813 00:24:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:24.813 00:24:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.813 00:24:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:24.813 00:24:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.813 00:24:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:24.813 00:24:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:24.813 00:24:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:24.813 00:24:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:24.813 00:24:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:24.813 00:24:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:24.813 00:24:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.072 00:24:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.072 00:24:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:25.072 00:24:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:25.072 00:24:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:25.072 00:24:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:25.072 00:24:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:25.072 00:24:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.072 00:24:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:25.072 00:24:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:25.072 00:24:12 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:25.072 00:24:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:25.072 00:24:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:25.072 00:24:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:25.072 Cannot find device "nvmf_tgt_br" 00:15:25.072 00:24:12 -- nvmf/common.sh@154 -- # true 00:15:25.072 00:24:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:25.072 Cannot find device "nvmf_tgt_br2" 00:15:25.072 00:24:12 -- nvmf/common.sh@155 -- # true 00:15:25.072 00:24:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:25.072 00:24:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:25.072 Cannot find device "nvmf_tgt_br" 00:15:25.072 00:24:12 -- nvmf/common.sh@157 -- # true 00:15:25.072 00:24:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:25.072 Cannot find device "nvmf_tgt_br2" 00:15:25.072 00:24:12 -- nvmf/common.sh@158 -- # true 00:15:25.072 00:24:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:25.072 00:24:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:25.072 00:24:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:25.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.072 00:24:12 -- nvmf/common.sh@161 -- # true 00:15:25.072 00:24:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:25.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.072 00:24:12 -- nvmf/common.sh@162 -- # true 00:15:25.072 00:24:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:25.072 00:24:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:25.072 00:24:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:25.072 00:24:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:25.072 00:24:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:25.072 00:24:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:25.072 00:24:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:25.072 00:24:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:25.072 00:24:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:25.072 00:24:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:25.072 00:24:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:25.072 00:24:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:25.072 00:24:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:25.072 00:24:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:25.072 00:24:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:25.072 00:24:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:25.072 00:24:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:25.072 00:24:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:25.072 00:24:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:25.331 00:24:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:25.331 00:24:12 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:25.331 00:24:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:25.331 00:24:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:25.331 00:24:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:25.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:15:25.331 00:15:25.331 --- 10.0.0.2 ping statistics --- 00:15:25.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.331 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:15:25.331 00:24:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:25.331 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:25.331 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:15:25.331 00:15:25.331 --- 10.0.0.3 ping statistics --- 00:15:25.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.331 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:25.331 00:24:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:25.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:15:25.331 00:15:25.331 --- 10.0.0.1 ping statistics --- 00:15:25.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.331 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:25.331 00:24:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.331 00:24:12 -- nvmf/common.sh@421 -- # return 0 00:15:25.331 00:24:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:25.331 00:24:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.331 00:24:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:25.331 00:24:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:25.331 00:24:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.331 00:24:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:25.331 00:24:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:25.331 00:24:12 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:25.331 00:24:12 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:25.331 00:24:12 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:25.331 00:24:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:25.331 00:24:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:25.331 00:24:12 -- common/autotest_common.sh@10 -- # set +x 00:15:25.331 00:24:12 -- nvmf/common.sh@469 -- # nvmfpid=85125 00:15:25.331 00:24:12 -- nvmf/common.sh@470 -- # waitforlisten 85125 00:15:25.331 00:24:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:25.331 00:24:12 -- common/autotest_common.sh@819 -- # '[' -z 85125 ']' 00:15:25.331 00:24:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.331 00:24:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:25.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.331 00:24:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
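(Reference note: the nvmf_veth_init steps traced above reduce to the following standalone sequence. Interface names, addresses and firewall rules are taken verbatim from this trace; the earlier "Cannot find device" / "Cannot open network namespace" messages appear to be the initial cleanup pass failing harmlessly on a fresh host, and that cleanup plus error handling is omitted here.)

# Build the two-path veth topology used by the test: one initiator-side veth,
# two target-side veths living in a dedicated network namespace, all bridged.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target listener address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target listener address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                 # sanity-check both paths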
00:15:25.331 00:24:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:25.331 00:24:12 -- common/autotest_common.sh@10 -- # set +x 00:15:25.331 [2024-07-13 00:24:12.438000] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:25.331 [2024-07-13 00:24:12.438120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.590 [2024-07-13 00:24:12.583577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.590 [2024-07-13 00:24:12.684688] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:25.590 [2024-07-13 00:24:12.684863] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.590 [2024-07-13 00:24:12.684888] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.590 [2024-07-13 00:24:12.684899] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.590 [2024-07-13 00:24:12.685097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.590 [2024-07-13 00:24:12.685247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.590 [2024-07-13 00:24:12.685382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.590 [2024-07-13 00:24:12.685382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.158 00:24:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:26.158 00:24:13 -- common/autotest_common.sh@852 -- # return 0 00:15:26.158 00:24:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:26.417 00:24:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:26.417 00:24:13 -- common/autotest_common.sh@10 -- # set +x 00:15:26.417 00:24:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.417 00:24:13 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:26.417 [2024-07-13 00:24:13.629950] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.676 00:24:13 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:26.934 Malloc0 00:15:26.934 00:24:13 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:27.191 00:24:14 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:27.448 00:24:14 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.448 [2024-07-13 00:24:14.646487] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.448 00:24:14 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:27.706 [2024-07-13 00:24:14.894891] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:27.706 00:24:14 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:27.963 00:24:15 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:28.222 00:24:15 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:28.222 00:24:15 -- common/autotest_common.sh@1177 -- # local i=0 00:15:28.222 00:24:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:28.222 00:24:15 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:28.222 00:24:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:30.154 00:24:17 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:30.154 00:24:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:30.154 00:24:17 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:30.414 00:24:17 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:30.414 00:24:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:30.414 00:24:17 -- common/autotest_common.sh@1187 -- # return 0 00:15:30.414 00:24:17 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:30.414 00:24:17 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:30.414 00:24:17 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:30.414 00:24:17 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:30.414 00:24:17 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:30.414 00:24:17 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:30.414 00:24:17 -- target/multipath.sh@38 -- # return 0 00:15:30.414 00:24:17 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:30.414 00:24:17 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:30.414 00:24:17 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:30.414 00:24:17 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:30.414 00:24:17 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:30.414 00:24:17 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:30.414 00:24:17 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:30.414 00:24:17 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:30.414 00:24:17 -- target/multipath.sh@22 -- # local timeout=20 00:15:30.414 00:24:17 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:30.414 00:24:17 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:30.414 00:24:17 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:30.414 00:24:17 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:30.414 00:24:17 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:30.414 00:24:17 -- target/multipath.sh@22 -- # local timeout=20 00:15:30.414 00:24:17 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:30.414 00:24:17 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:30.414 00:24:17 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:30.414 00:24:17 -- target/multipath.sh@85 -- # echo numa 00:15:30.414 00:24:17 -- target/multipath.sh@88 -- # fio_pid=85268 00:15:30.414 00:24:17 -- target/multipath.sh@90 -- # sleep 1 00:15:30.414 00:24:17 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:30.414 [global] 00:15:30.414 thread=1 00:15:30.414 invalidate=1 00:15:30.414 rw=randrw 00:15:30.414 time_based=1 00:15:30.414 runtime=6 00:15:30.414 ioengine=libaio 00:15:30.414 direct=1 00:15:30.414 bs=4096 00:15:30.414 iodepth=128 00:15:30.414 norandommap=0 00:15:30.414 numjobs=1 00:15:30.414 00:15:30.414 verify_dump=1 00:15:30.414 verify_backlog=512 00:15:30.414 verify_state_save=0 00:15:30.414 do_verify=1 00:15:30.414 verify=crc32c-intel 00:15:30.414 [job0] 00:15:30.414 filename=/dev/nvme0n1 00:15:30.414 Could not set queue depth (nvme0n1) 00:15:30.414 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:30.414 fio-3.35 00:15:30.414 Starting 1 thread 00:15:31.350 00:24:18 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:31.609 00:24:18 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:31.867 00:24:18 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:31.867 00:24:18 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:31.867 00:24:18 -- target/multipath.sh@22 -- # local timeout=20 00:15:31.867 00:24:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:31.867 00:24:18 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:31.867 00:24:18 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:31.867 00:24:18 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:31.867 00:24:18 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:31.867 00:24:18 -- target/multipath.sh@22 -- # local timeout=20 00:15:31.867 00:24:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:31.867 00:24:18 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:31.867 00:24:18 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:31.867 00:24:18 -- target/multipath.sh@25 -- # sleep 1s 00:15:32.802 00:24:19 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:32.802 00:24:19 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:32.802 00:24:19 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:32.802 00:24:19 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:33.059 00:24:20 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:33.317 00:24:20 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:33.317 00:24:20 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:33.317 00:24:20 -- target/multipath.sh@22 -- # local timeout=20 00:15:33.317 00:24:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:33.317 00:24:20 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:33.317 00:24:20 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:33.317 00:24:20 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:33.317 00:24:20 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:33.317 00:24:20 -- target/multipath.sh@22 -- # local timeout=20 00:15:33.317 00:24:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:33.317 00:24:20 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:33.317 00:24:20 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:33.317 00:24:20 -- target/multipath.sh@25 -- # sleep 1s 00:15:34.692 00:24:21 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:34.692 00:24:21 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:34.692 00:24:21 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:34.692 00:24:21 -- target/multipath.sh@104 -- # wait 85268 00:15:36.591 00:15:36.591 job0: (groupid=0, jobs=1): err= 0: pid=85289: Sat Jul 13 00:24:23 2024 00:15:36.591 read: IOPS=11.2k, BW=43.7MiB/s (45.8MB/s)(262MiB/6006msec) 00:15:36.591 slat (usec): min=6, max=5448, avg=50.46, stdev=223.78 00:15:36.591 clat (usec): min=2622, max=15700, avg=7744.19, stdev=1233.24 00:15:36.591 lat (usec): min=2631, max=15716, avg=7794.65, stdev=1242.60 00:15:36.591 clat percentiles (usec): 00:15:36.591 | 1.00th=[ 4817], 5.00th=[ 5997], 10.00th=[ 6456], 20.00th=[ 6915], 00:15:36.591 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7635], 60.00th=[ 7963], 00:15:36.591 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[ 9896], 00:15:36.591 | 99.00th=[11469], 99.50th=[11863], 99.90th=[13435], 99.95th=[14091], 00:15:36.591 | 99.99th=[15401] 00:15:36.591 bw ( KiB/s): min=14216, max=28624, per=52.52%, avg=23497.45, stdev=3899.53, samples=11 00:15:36.591 iops : min= 3554, max= 7156, avg=5874.36, stdev=974.88, samples=11 00:15:36.591 write: IOPS=6574, BW=25.7MiB/s (26.9MB/s)(142MiB/5529msec); 0 zone resets 00:15:36.591 slat (usec): min=14, max=2275, avg=62.73, stdev=157.37 00:15:36.591 clat (usec): min=693, max=16236, avg=6644.20, stdev=1013.60 00:15:36.591 lat (usec): min=744, max=16260, avg=6706.93, stdev=1017.31 00:15:36.591 clat percentiles (usec): 00:15:36.591 | 1.00th=[ 3720], 5.00th=[ 4883], 10.00th=[ 5604], 20.00th=[ 6063], 00:15:36.591 | 30.00th=[ 6259], 40.00th=[ 6521], 50.00th=[ 6718], 60.00th=[ 6849], 00:15:36.591 | 70.00th=[ 7046], 80.00th=[ 7242], 90.00th=[ 7570], 95.00th=[ 7963], 00:15:36.591 | 99.00th=[ 9765], 99.50th=[10421], 99.90th=[12125], 99.95th=[12649], 00:15:36.591 | 99.99th=[13304] 00:15:36.591 bw ( KiB/s): min=14960, max=28136, per=89.42%, avg=23514.18, stdev=3748.82, samples=11 00:15:36.591 iops : min= 3740, max= 7034, avg=5878.55, stdev=937.20, samples=11 00:15:36.591 lat (usec) : 750=0.01% 00:15:36.591 lat (msec) : 2=0.01%, 4=0.81%, 10=95.93%, 20=3.26% 00:15:36.591 cpu : usr=5.45%, sys=23.16%, ctx=6203, majf=0, minf=108 00:15:36.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:36.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:36.592 issued rwts: total=67176,36349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.592 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:36.592 00:15:36.592 Run status group 0 (all jobs): 00:15:36.592 READ: bw=43.7MiB/s (45.8MB/s), 43.7MiB/s-43.7MiB/s (45.8MB/s-45.8MB/s), io=262MiB (275MB), run=6006-6006msec 00:15:36.592 WRITE: bw=25.7MiB/s (26.9MB/s), 25.7MiB/s-25.7MiB/s (26.9MB/s-26.9MB/s), io=142MiB (149MB), run=5529-5529msec 00:15:36.592 00:15:36.592 Disk stats (read/write): 00:15:36.592 nvme0n1: ios=66149/35555, merge=0/0, ticks=479241/221281, in_queue=700522, util=98.63% 00:15:36.592 00:24:23 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:36.849 00:24:23 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:37.107 00:24:24 -- target/multipath.sh@109 -- # check_ana_state 
nvme0c0n1 optimized 00:15:37.107 00:24:24 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:37.107 00:24:24 -- target/multipath.sh@22 -- # local timeout=20 00:15:37.107 00:24:24 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:37.107 00:24:24 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:37.107 00:24:24 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:37.107 00:24:24 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:37.107 00:24:24 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:37.107 00:24:24 -- target/multipath.sh@22 -- # local timeout=20 00:15:37.107 00:24:24 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:37.107 00:24:24 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:37.107 00:24:24 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:37.107 00:24:24 -- target/multipath.sh@25 -- # sleep 1s 00:15:38.041 00:24:25 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:38.041 00:24:25 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:38.041 00:24:25 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:38.041 00:24:25 -- target/multipath.sh@113 -- # echo round-robin 00:15:38.041 00:24:25 -- target/multipath.sh@116 -- # fio_pid=85412 00:15:38.042 00:24:25 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:38.042 00:24:25 -- target/multipath.sh@118 -- # sleep 1 00:15:38.042 [global] 00:15:38.042 thread=1 00:15:38.042 invalidate=1 00:15:38.042 rw=randrw 00:15:38.042 time_based=1 00:15:38.042 runtime=6 00:15:38.042 ioengine=libaio 00:15:38.042 direct=1 00:15:38.042 bs=4096 00:15:38.042 iodepth=128 00:15:38.042 norandommap=0 00:15:38.042 numjobs=1 00:15:38.042 00:15:38.300 verify_dump=1 00:15:38.300 verify_backlog=512 00:15:38.300 verify_state_save=0 00:15:38.300 do_verify=1 00:15:38.300 verify=crc32c-intel 00:15:38.300 [job0] 00:15:38.300 filename=/dev/nvme0n1 00:15:38.300 Could not set queue depth (nvme0n1) 00:15:38.300 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:38.300 fio-3.35 00:15:38.300 Starting 1 thread 00:15:39.235 00:24:26 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:39.494 00:24:26 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:39.753 00:24:26 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:39.753 00:24:26 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:39.753 00:24:26 -- target/multipath.sh@22 -- # local timeout=20 00:15:39.753 00:24:26 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:39.753 00:24:26 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:39.753 00:24:26 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:39.753 00:24:26 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:39.753 00:24:26 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:39.753 00:24:26 -- target/multipath.sh@22 -- # local timeout=20 00:15:39.753 00:24:26 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:39.753 00:24:26 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:39.753 00:24:26 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:39.753 00:24:26 -- target/multipath.sh@25 -- # sleep 1s 00:15:40.690 00:24:27 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:40.690 00:24:27 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:40.690 00:24:27 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:40.690 00:24:27 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:40.949 00:24:28 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:41.209 00:24:28 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:41.209 00:24:28 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:41.209 00:24:28 -- target/multipath.sh@22 -- # local timeout=20 00:15:41.209 00:24:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:41.209 00:24:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:41.209 00:24:28 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:41.209 00:24:28 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:41.209 00:24:28 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:41.209 00:24:28 -- target/multipath.sh@22 -- # local timeout=20 00:15:41.209 00:24:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:41.209 00:24:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:41.209 00:24:28 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:41.209 00:24:28 -- target/multipath.sh@25 -- # sleep 1s 00:15:42.147 00:24:29 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:42.147 00:24:29 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:42.147 00:24:29 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:42.147 00:24:29 -- target/multipath.sh@132 -- # wait 85412 00:15:44.682 00:15:44.682 job0: (groupid=0, jobs=1): err= 0: pid=85434: Sat Jul 13 00:24:31 2024 00:15:44.682 read: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(273MiB/6005msec) 00:15:44.682 slat (usec): min=5, max=5662, avg=43.58, stdev=204.44 00:15:44.682 clat (usec): min=309, max=18016, avg=7525.83, stdev=1958.95 00:15:44.682 lat (usec): min=318, max=18025, avg=7569.40, stdev=1963.84 00:15:44.682 clat percentiles (usec): 00:15:44.682 | 1.00th=[ 2474], 5.00th=[ 3916], 10.00th=[ 5538], 20.00th=[ 6456], 00:15:44.682 | 30.00th=[ 6783], 40.00th=[ 7111], 50.00th=[ 7373], 60.00th=[ 7767], 00:15:44.682 | 70.00th=[ 8160], 80.00th=[ 8586], 90.00th=[ 9765], 95.00th=[11207], 00:15:44.682 | 99.00th=[13435], 99.50th=[14222], 99.90th=[15533], 99.95th=[16581], 00:15:44.682 | 99.99th=[17695] 00:15:44.682 bw ( KiB/s): min=11560, max=33872, per=52.72%, avg=24528.82, stdev=7158.20, samples=11 00:15:44.682 iops : min= 2890, max= 8468, avg=6132.18, stdev=1789.53, samples=11 00:15:44.682 write: IOPS=6985, BW=27.3MiB/s (28.6MB/s)(145MiB/5301msec); 0 zone resets 00:15:44.682 slat (usec): min=10, max=2570, avg=51.41, stdev=134.79 00:15:44.682 clat (usec): min=559, max=14496, avg=6395.39, stdev=1900.14 00:15:44.682 lat (usec): min=597, max=14528, avg=6446.80, stdev=1903.68 00:15:44.682 clat percentiles (usec): 00:15:44.682 | 1.00th=[ 1663], 5.00th=[ 2540], 10.00th=[ 3458], 20.00th=[ 5473], 00:15:44.682 | 30.00th=[ 6063], 40.00th=[ 6325], 50.00th=[ 6587], 60.00th=[ 6849], 00:15:44.682 | 70.00th=[ 7046], 80.00th=[ 7373], 90.00th=[ 8455], 95.00th=[ 9634], 00:15:44.682 | 99.00th=[11207], 99.50th=[11731], 99.90th=[12649], 99.95th=[13566], 00:15:44.682 | 99.99th=[14091] 00:15:44.682 bw ( KiB/s): min=12168, max=33416, per=87.85%, avg=24549.00, stdev=6935.30, samples=11 00:15:44.682 iops : min= 3042, max= 8354, avg=6137.18, stdev=1733.74, samples=11 00:15:44.682 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.04% 00:15:44.682 lat (msec) : 2=0.96%, 4=6.85%, 10=84.95%, 20=7.17% 00:15:44.682 cpu : usr=5.51%, sys=22.20%, ctx=6768, majf=0, minf=169 00:15:44.682 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:15:44.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:44.682 issued rwts: total=69849,37031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:44.682 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:44.682 00:15:44.682 Run status group 0 (all jobs): 00:15:44.682 READ: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=273MiB (286MB), run=6005-6005msec 00:15:44.682 WRITE: bw=27.3MiB/s (28.6MB/s), 27.3MiB/s-27.3MiB/s (28.6MB/s-28.6MB/s), io=145MiB (152MB), run=5301-5301msec 00:15:44.682 00:15:44.682 Disk stats (read/write): 00:15:44.682 nvme0n1: ios=68814/36356, merge=0/0, ticks=487486/218144, in_queue=705630, util=98.71% 00:15:44.682 00:24:31 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:44.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:44.682 00:24:31 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:44.682 00:24:31 -- common/autotest_common.sh@1198 -- # local i=0 00:15:44.682 00:24:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:44.682 00:24:31 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.682 00:24:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:44.682 00:24:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.682 00:24:31 -- common/autotest_common.sh@1210 -- # return 0 00:15:44.682 00:24:31 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:44.942 00:24:31 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:44.942 00:24:31 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:44.942 00:24:31 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:44.942 00:24:31 -- target/multipath.sh@144 -- # nvmftestfini 00:15:44.942 00:24:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:44.942 00:24:31 -- nvmf/common.sh@116 -- # sync 00:15:44.942 00:24:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:44.942 00:24:31 -- nvmf/common.sh@119 -- # set +e 00:15:44.942 00:24:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:44.942 00:24:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:44.942 rmmod nvme_tcp 00:15:44.942 rmmod nvme_fabrics 00:15:44.942 rmmod nvme_keyring 00:15:44.942 00:24:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:44.942 00:24:32 -- nvmf/common.sh@123 -- # set -e 00:15:44.942 00:24:32 -- nvmf/common.sh@124 -- # return 0 00:15:44.942 00:24:32 -- nvmf/common.sh@477 -- # '[' -n 85125 ']' 00:15:44.942 00:24:32 -- nvmf/common.sh@478 -- # killprocess 85125 00:15:44.942 00:24:32 -- common/autotest_common.sh@926 -- # '[' -z 85125 ']' 00:15:44.942 00:24:32 -- common/autotest_common.sh@930 -- # kill -0 85125 00:15:44.942 00:24:32 -- common/autotest_common.sh@931 -- # uname 00:15:44.942 00:24:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:44.942 00:24:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85125 00:15:44.942 killing process with pid 85125 00:15:44.942 00:24:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:44.942 00:24:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:44.942 00:24:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85125' 00:15:44.942 00:24:32 -- common/autotest_common.sh@945 -- # kill 85125 00:15:44.942 00:24:32 -- common/autotest_common.sh@950 -- # wait 85125 00:15:45.202 00:24:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:45.202 00:24:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:45.202 00:24:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:45.202 00:24:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:45.202 00:24:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:45.202 00:24:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.202 00:24:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.202 00:24:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.203 00:24:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:45.203 ************************************ 00:15:45.203 END TEST nvmf_multipath 00:15:45.203 ************************************ 00:15:45.203 00:15:45.203 real 0m20.397s 00:15:45.203 user 1m20.131s 00:15:45.203 sys 0m6.291s 00:15:45.203 00:24:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.203 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:15:45.203 00:24:32 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:45.203 00:24:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:45.203 00:24:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:45.203 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:15:45.203 ************************************ 00:15:45.203 START TEST nvmf_zcopy 00:15:45.203 ************************************ 00:15:45.203 00:24:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:45.500 * Looking for test storage... 00:15:45.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:45.500 00:24:32 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:45.500 00:24:32 -- nvmf/common.sh@7 -- # uname -s 00:15:45.500 00:24:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.500 00:24:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.500 00:24:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.500 00:24:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.500 00:24:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.500 00:24:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.500 00:24:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.500 00:24:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.500 00:24:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.500 00:24:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.500 00:24:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:15:45.500 00:24:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:15:45.500 00:24:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.500 00:24:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.500 00:24:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:45.500 00:24:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:45.500 00:24:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.500 00:24:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.500 00:24:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.500 00:24:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.500 00:24:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.500 
00:24:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.500 00:24:32 -- paths/export.sh@5 -- # export PATH 00:15:45.500 00:24:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.500 00:24:32 -- nvmf/common.sh@46 -- # : 0 00:15:45.500 00:24:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:45.500 00:24:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:45.500 00:24:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:45.500 00:24:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.500 00:24:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.500 00:24:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:45.500 00:24:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:45.500 00:24:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:45.500 00:24:32 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:45.500 00:24:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:45.500 00:24:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.500 00:24:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:45.500 00:24:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:45.500 00:24:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:45.500 00:24:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.500 00:24:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.500 00:24:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.500 00:24:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:45.500 00:24:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:45.500 00:24:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:45.500 00:24:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:45.500 00:24:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:45.500 00:24:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:45.500 00:24:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.500 00:24:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:45.500 00:24:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:45.500 00:24:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:45.500 00:24:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:45.500 00:24:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:45.500 00:24:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:45.500 00:24:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.500 00:24:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:45.500 00:24:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:45.500 00:24:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:45.500 00:24:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:45.500 00:24:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:45.500 00:24:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:45.500 Cannot find device "nvmf_tgt_br" 00:15:45.500 00:24:32 -- nvmf/common.sh@154 -- # true 00:15:45.500 00:24:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:45.500 Cannot find device "nvmf_tgt_br2" 00:15:45.500 00:24:32 -- nvmf/common.sh@155 -- # true 00:15:45.500 00:24:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:45.500 00:24:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:45.500 Cannot find device "nvmf_tgt_br" 00:15:45.500 00:24:32 -- nvmf/common.sh@157 -- # true 00:15:45.500 00:24:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:45.500 Cannot find device "nvmf_tgt_br2" 00:15:45.500 00:24:32 -- nvmf/common.sh@158 -- # true 00:15:45.500 00:24:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:45.500 00:24:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:45.500 00:24:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:45.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.500 00:24:32 -- nvmf/common.sh@161 -- # true 00:15:45.500 00:24:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:45.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.500 00:24:32 -- nvmf/common.sh@162 -- # true 00:15:45.500 00:24:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:45.500 00:24:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:45.500 00:24:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:45.500 00:24:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:45.500 00:24:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:45.500 00:24:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:45.500 00:24:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:45.500 00:24:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:45.500 00:24:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:45.763 00:24:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:45.763 00:24:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:45.763 00:24:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:45.763 00:24:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:45.763 00:24:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:45.763 00:24:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:45.763 00:24:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:45.763 00:24:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:45.763 
00:24:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:45.763 00:24:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:45.763 00:24:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:45.763 00:24:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:45.763 00:24:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:45.763 00:24:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:45.763 00:24:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:45.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:45.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:15:45.764 00:15:45.764 --- 10.0.0.2 ping statistics --- 00:15:45.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.764 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:45.764 00:24:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:45.764 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:45.764 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:15:45.764 00:15:45.764 --- 10.0.0.3 ping statistics --- 00:15:45.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.764 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:45.764 00:24:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:45.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:45.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:45.764 00:15:45.764 --- 10.0.0.1 ping statistics --- 00:15:45.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.764 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:45.764 00:24:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:45.764 00:24:32 -- nvmf/common.sh@421 -- # return 0 00:15:45.764 00:24:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:45.764 00:24:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:45.764 00:24:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:45.764 00:24:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:45.764 00:24:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:45.764 00:24:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:45.764 00:24:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:45.764 00:24:32 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:45.764 00:24:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:45.764 00:24:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:45.764 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:15:45.764 00:24:32 -- nvmf/common.sh@469 -- # nvmfpid=85713 00:15:45.764 00:24:32 -- nvmf/common.sh@470 -- # waitforlisten 85713 00:15:45.764 00:24:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:45.764 00:24:32 -- common/autotest_common.sh@819 -- # '[' -z 85713 ']' 00:15:45.764 00:24:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.764 00:24:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:45.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.764 00:24:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:45.764 00:24:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:45.764 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:15:45.764 [2024-07-13 00:24:32.922018] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:45.764 [2024-07-13 00:24:32.922118] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.021 [2024-07-13 00:24:33.063123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.021 [2024-07-13 00:24:33.157060] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:46.021 [2024-07-13 00:24:33.157248] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.021 [2024-07-13 00:24:33.157266] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.021 [2024-07-13 00:24:33.157277] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.021 [2024-07-13 00:24:33.157308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.954 00:24:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:46.955 00:24:33 -- common/autotest_common.sh@852 -- # return 0 00:15:46.955 00:24:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:46.955 00:24:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:46.955 00:24:33 -- common/autotest_common.sh@10 -- # set +x 00:15:46.955 00:24:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.955 00:24:33 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:46.955 00:24:33 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:46.955 00:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.955 00:24:33 -- common/autotest_common.sh@10 -- # set +x 00:15:46.955 [2024-07-13 00:24:33.919939] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.955 00:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.955 00:24:33 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:46.955 00:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.955 00:24:33 -- common/autotest_common.sh@10 -- # set +x 00:15:46.955 00:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.955 00:24:33 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.955 00:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.955 00:24:33 -- common/autotest_common.sh@10 -- # set +x 00:15:46.955 [2024-07-13 00:24:33.940064] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.955 00:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.955 00:24:33 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:46.955 00:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.955 00:24:33 -- common/autotest_common.sh@10 -- # set +x 00:15:46.955 00:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.955 00:24:33 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
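(Reference note: rpc_cmd in the trace above forwards the same arguments that scripts/rpc.py accepts on its command line, so the zero-copy target bring-up shown here, including the nvmf_subsystem_add_ns call that follows just below, can be reproduced standalone roughly as follows; all arguments are copied from the trace.)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                               # TCP transport with zero-copy enabled (zcopy.sh@22)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                                      # 32 MiB malloc bdev, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1              # issued at zcopy.sh@30 just below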
00:15:46.955 00:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.955 00:24:33 -- common/autotest_common.sh@10 -- # set +x 00:15:46.955 malloc0 00:15:46.955 00:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.955 00:24:33 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:46.955 00:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.955 00:24:33 -- common/autotest_common.sh@10 -- # set +x 00:15:46.955 00:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.955 00:24:33 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:46.955 00:24:33 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:46.955 00:24:33 -- nvmf/common.sh@520 -- # config=() 00:15:46.955 00:24:33 -- nvmf/common.sh@520 -- # local subsystem config 00:15:46.955 00:24:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:46.955 00:24:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:46.955 { 00:15:46.955 "params": { 00:15:46.955 "name": "Nvme$subsystem", 00:15:46.955 "trtype": "$TEST_TRANSPORT", 00:15:46.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:46.955 "adrfam": "ipv4", 00:15:46.955 "trsvcid": "$NVMF_PORT", 00:15:46.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:46.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:46.955 "hdgst": ${hdgst:-false}, 00:15:46.955 "ddgst": ${ddgst:-false} 00:15:46.955 }, 00:15:46.955 "method": "bdev_nvme_attach_controller" 00:15:46.955 } 00:15:46.955 EOF 00:15:46.955 )") 00:15:46.955 00:24:33 -- nvmf/common.sh@542 -- # cat 00:15:46.955 00:24:33 -- nvmf/common.sh@544 -- # jq . 00:15:46.955 00:24:33 -- nvmf/common.sh@545 -- # IFS=, 00:15:46.955 00:24:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:46.955 "params": { 00:15:46.955 "name": "Nvme1", 00:15:46.955 "trtype": "tcp", 00:15:46.955 "traddr": "10.0.0.2", 00:15:46.955 "adrfam": "ipv4", 00:15:46.955 "trsvcid": "4420", 00:15:46.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:46.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:46.955 "hdgst": false, 00:15:46.955 "ddgst": false 00:15:46.955 }, 00:15:46.955 "method": "bdev_nvme_attach_controller" 00:15:46.955 }' 00:15:46.955 [2024-07-13 00:24:34.035536] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:46.955 [2024-07-13 00:24:34.035662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85764 ] 00:15:46.955 [2024-07-13 00:24:34.179553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.213 [2024-07-13 00:24:34.267218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.213 Running I/O for 10 seconds... 
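(Reference note: bdevperf receives its bdev configuration on a /dev/fd process-substitution descriptor produced by gen_nvmf_target_json; only the inner bdev_nvme_attach_controller object is printed verbatim in the trace above. A roughly equivalent standalone run is sketched below, assuming the standard SPDK application JSON layout for the wrapper, a "subsystems" array containing a bdev "config" list, which is an assumption here and not shown in the log; /tmp/nvme1.json is a hypothetical file name.)

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# First run (zcopy.sh@33): 10-second verify workload, queue depth 128, 8192-byte I/O.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nvme1.json -t 10 -q 128 -w verify -o 8192
# The second run at zcopy.sh@37 below reuses the same configuration with
# -t 5 -q 128 -w randrw -M 50 -o 8192, while the test repeatedly exercises
# nvmf_subsystem_add_ns against the live subsystem (hence the
# "Requested NSID 1 already in use" errors that follow).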
00:15:59.423 00:15:59.423 Latency(us) 00:15:59.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.423 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:59.423 Verification LBA range: start 0x0 length 0x1000 00:15:59.423 Nvme1n1 : 10.01 10125.68 79.11 0.00 0.00 12609.26 677.70 20971.52 00:15:59.423 =================================================================================================================== 00:15:59.423 Total : 10125.68 79.11 0.00 0.00 12609.26 677.70 20971.52 00:15:59.423 00:24:44 -- target/zcopy.sh@39 -- # perfpid=85879 00:15:59.423 00:24:44 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:59.423 00:24:44 -- common/autotest_common.sh@10 -- # set +x 00:15:59.423 00:24:44 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:59.423 00:24:44 -- nvmf/common.sh@520 -- # config=() 00:15:59.423 00:24:44 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:59.423 00:24:44 -- nvmf/common.sh@520 -- # local subsystem config 00:15:59.423 00:24:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:59.423 00:24:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:59.423 { 00:15:59.423 "params": { 00:15:59.423 "name": "Nvme$subsystem", 00:15:59.423 "trtype": "$TEST_TRANSPORT", 00:15:59.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:59.423 "adrfam": "ipv4", 00:15:59.423 "trsvcid": "$NVMF_PORT", 00:15:59.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:59.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:59.423 "hdgst": ${hdgst:-false}, 00:15:59.423 "ddgst": ${ddgst:-false} 00:15:59.423 }, 00:15:59.423 "method": "bdev_nvme_attach_controller" 00:15:59.423 } 00:15:59.423 EOF 00:15:59.423 )") 00:15:59.423 00:24:44 -- nvmf/common.sh@542 -- # cat 00:15:59.423 [2024-07-13 00:24:44.659925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.423 [2024-07-13 00:24:44.659971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.423 00:24:44 -- nvmf/common.sh@544 -- # jq . 
00:15:59.423 00:24:44 -- target/zcopy.sh@39 -- # perfpid=85879
00:15:59.423 00:24:44 -- target/zcopy.sh@41 -- # xtrace_disable
00:15:59.423 00:24:44 -- common/autotest_common.sh@10 -- # set +x
00:15:59.423 00:24:44 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:15:59.423 00:24:44 -- nvmf/common.sh@520 -- # config=()
00:15:59.423 00:24:44 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:15:59.423 00:24:44 -- nvmf/common.sh@520 -- # local subsystem config
00:15:59.423 00:24:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:15:59.423 00:24:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:15:59.423 {
00:15:59.423 "params": {
00:15:59.423 "name": "Nvme$subsystem",
00:15:59.423 "trtype": "$TEST_TRANSPORT",
00:15:59.423 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:59.423 "adrfam": "ipv4",
00:15:59.423 "trsvcid": "$NVMF_PORT",
00:15:59.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:59.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:59.423 "hdgst": ${hdgst:-false},
00:15:59.423 "ddgst": ${ddgst:-false}
00:15:59.423 },
00:15:59.423 "method": "bdev_nvme_attach_controller"
00:15:59.423 }
00:15:59.423 EOF
00:15:59.423 )")
00:15:59.423 00:24:44 -- nvmf/common.sh@542 -- # cat
00:15:59.423 [2024-07-13 00:24:44.659925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:59.423 [2024-07-13 00:24:44.659971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:59.423 00:24:44 -- nvmf/common.sh@544 -- # jq .
00:15:59.423 00:24:44 -- nvmf/common.sh@545 -- # IFS=,
00:15:59.423 00:24:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:15:59.423 "params": {
00:15:59.423 "name": "Nvme1",
00:15:59.423 "trtype": "tcp",
00:15:59.423 "traddr": "10.0.0.2",
00:15:59.423 "adrfam": "ipv4",
00:15:59.423 "trsvcid": "4420",
00:15:59.423 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:59.423 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:15:59.423 "hdgst": false,
00:15:59.423 "ddgst": false
00:15:59.423 },
00:15:59.423 "method": "bdev_nvme_attach_controller"
00:15:59.423 }'
00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:59.423 [2024-07-13 00:24:44.671892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:59.423 [2024-07-13 00:24:44.671918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:59.423 [2024-07-13 00:24:44.683896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:59.423 [2024-07-13 00:24:44.683920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:59.423 [2024-07-13 00:24:44.691882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:59.423 [2024-07-13 00:24:44.691905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:59.423 [2024-07-13 00:24:44.703904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:59.423 [2024-07-13 00:24:44.703928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:59.423 [2024-07-13 00:24:44.709323] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
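[Editor's note] The repeated *ERROR* pairs and "error on JSON-RPC call" lines above and below all record the same exchange: while NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_ns is re-issued for the malloc0 bdev and the target rejects it with JSON-RPC error -32602. A single iteration looks roughly like the sketch below; the rpc.py path and default RPC socket are assumptions (rpc_cmd in these scripts is understood to be a thin wrapper around scripts/rpc.py), while the method name, arguments, and error code are taken from the log.

# Hedged sketch of one failing iteration; arguments mirror the rpc_cmd call
# logged at 00:24:33 above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
    nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# While NSID 1 already exists this fails; the raw JSON-RPC exchange is roughly
# (reconstructed from the Go-style "params: map[...]" dump in the log):
#   request:  {"method": "nvmf_subsystem_add_ns",
#              "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
#                         "namespace": {"bdev_name": "malloc0", "nsid": 1}}}
#   response: {"error": {"code": -32602, "message": "Invalid parameters"}}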
00:15:59.423 [2024-07-13 00:24:44.709398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85879 ] 00:15:59.423 [2024-07-13 00:24:44.715904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.423 [2024-07-13 00:24:44.715932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.423 [2024-07-13 00:24:44.727911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.423 [2024-07-13 00:24:44.727938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.423 [2024-07-13 00:24:44.739909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.423 [2024-07-13 00:24:44.739936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.423 [2024-07-13 00:24:44.751911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.423 [2024-07-13 00:24:44.751937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.423 [2024-07-13 00:24:44.763904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.423 [2024-07-13 00:24:44.763932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.423 [2024-07-13 00:24:44.775903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.423 [2024-07-13 00:24:44.775929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.423 [2024-07-13 00:24:44.787919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.423 [2024-07-13 00:24:44.787946] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.423 [2024-07-13 00:24:44.799914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.423 [2024-07-13 00:24:44.799945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.423 [2024-07-13 00:24:44.811914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.423 [2024-07-13 00:24:44.811943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.423 [2024-07-13 00:24:44.823908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.423 [2024-07-13 00:24:44.823934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.423 [2024-07-13 00:24:44.835910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.423 [2024-07-13 00:24:44.835936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.423 [2024-07-13 00:24:44.843911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.423 [2024-07-13 00:24:44.843952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.423 [2024-07-13 00:24:44.847914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.423 [2024-07-13 00:24:44.851936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.423 [2024-07-13 00:24:44.851964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.423 [2024-07-13 00:24:44.859926] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.423 [2024-07-13 00:24:44.860119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.423 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.423 [2024-07-13 00:24:44.867935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:44.868120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:44.875947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:44.876115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:44.883932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:44.884114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:44.891935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:44.891961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:44.899940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:44.900146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:44.907943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:44.907973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 
00:24:44.915941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:44.915969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:44.923939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:44.923982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:44.931940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:44.932139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:44.938041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.424 [2024-07-13 00:24:44.939962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:44.940108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:44.947962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:44.948109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:44.955971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:44.956134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:44.963957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:44.963989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:44.971974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:44.972005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:44.979959] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:44.980003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:44.987975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:44.988145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:44.995979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:44.996168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:45.003980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:45.004127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:45.011985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:45.012158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:45.019973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:45.020138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:45.027988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:45.028144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:45.035984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:45.036013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:45.044001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:45.044033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:45.051998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:45.052167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:45.060001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:45.060163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:45.072025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:45.072194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:45.080012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:45.080171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:45 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:45.087990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:45.088175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:45.096011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.424 [2024-07-13 00:24:45.096167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.424 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.424 [2024-07-13 00:24:45.104028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.104198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 Running I/O for 5 seconds... 00:15:59.425 [2024-07-13 00:24:45.112011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.112171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.124303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.124338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.133451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.133486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.146506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.146541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.158430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.158464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.167532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.167567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.179303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.179337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.188597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.188657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.200430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.200652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.212162] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.212337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.220455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.220644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.235617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.235828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.253411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.253447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.267467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.267503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.284198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.284233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.300716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.300897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.316835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.317007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.334503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.334690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.344263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.344419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.358388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.358564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.367227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.367381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.378091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.378247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.393466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.393563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.404375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.404407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.418958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.419007] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.427797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.427829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.444541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.444575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.460838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.460871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.425 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.425 [2024-07-13 00:24:45.472079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.425 [2024-07-13 00:24:45.472343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.480619] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.480814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.492713] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.492891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.503554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 
00:24:45.503739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.511267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.511427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.523625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.523799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.535470] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.535658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.552021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.552054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.567667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.567699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.576950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.576983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.593020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:59.426 [2024-07-13 00:24:45.593199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.610307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.610471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.621038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.621202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.637109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.637273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.648416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.648455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.663599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.663645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.681500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.681540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.697092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:59.426 [2024-07-13 00:24:45.697382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.714237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.714440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.725709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.725900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.735231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.735400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.749498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.749705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.758906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.759081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.773817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.773852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.783484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.783517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.797059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.797265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.805885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.806050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.819945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.820105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.829667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.829833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.426 [2024-07-13 00:24:45.844154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.426 [2024-07-13 00:24:45.844319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.426 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:45.854963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:45.855151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:45.871535] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:45.871735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:45.881528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:45.881700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:45.894811] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:45.894967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:45.903282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:45.903451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:45.914254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:45.914424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:45.927081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:45.927114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:45.944244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:45.944276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 
00:24:45.960232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:45.960264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:45.977243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:45.977275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:45.986424] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:45.986592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:45.995954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:45.996119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:46.005323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:46.005498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:46.014953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:46.015115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:46.024513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:46.024675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:15:59.427 [2024-07-13 00:24:46.034387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:46.034549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:46.043737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:46.043900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:46.057982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:46.058151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:46.066677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:46.066708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:46.080825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:46.080857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:46.089327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:46.089358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:46.098659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:46.098820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:46.107830] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:46.107988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.427 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.427 [2024-07-13 00:24:46.117288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.427 [2024-07-13 00:24:46.117445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.126459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.126622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.135953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.136109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.145690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.145744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.158933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.158965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.167936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.167968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.181576] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.181761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.189896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.190052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.201227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.201386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.210286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.210439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.224839] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.225024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.235908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.236068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.244192] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.244226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.260101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.260134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.278255] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.278422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.293504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.293701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.304483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.304645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.320549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.320738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.331940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.332115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.340791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.340968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.352002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.352035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.362406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.362437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.370600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.370661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.381699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.381730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.396057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.396223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.405363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.405522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.419220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.419376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.427930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.428093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.441637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.441796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.428 [2024-07-13 00:24:46.450647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.428 [2024-07-13 00:24:46.450678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.428 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.459736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.459766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.469029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.469061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.478285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.478447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.487824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.487986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.497288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.497451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.506741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.506902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.520040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.520200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.529042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.529200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.538753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.538784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.548333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.548365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.557907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.558083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.567736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.567898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.577987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.578170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.589508] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.589699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.597796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.597956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.610430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.610587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.621514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.621712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.429 [2024-07-13 00:24:46.638143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.429 [2024-07-13 00:24:46.638326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.429 2024/07/13 00:24:46 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.650373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.650535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.667865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.668038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.683506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.683720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.701279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.701455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.716707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.716901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.728693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.728735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.736953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.736984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 
00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.749050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.749081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.760116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.760146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.768535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.768569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.778514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.778544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.787803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.787832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.797040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.797070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.806387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.806417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.815609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.815649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.824343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.824372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.838349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.838380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.846590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.846640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.859089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.859119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.868491] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.868528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.877811] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.877841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.886979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.887008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.689 [2024-07-13 00:24:46.896440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.689 [2024-07-13 00:24:46.896496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.689 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.690 [2024-07-13 00:24:46.905779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.690 [2024-07-13 00:24:46.905808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.690 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.690 [2024-07-13 00:24:46.915313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.690 [2024-07-13 00:24:46.915343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.690 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:46.925824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:46.925855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:46.936243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:46.936275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:46.946076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:46.946107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:46.955985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:46.956015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:46.966091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:46.966122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:46.975593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:46.975641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:46.985000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:46.985030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:46.994614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:46.994656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:47.004290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:47.004320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:47.013802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:47.013832] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:47.023152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:47.023182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:47.032524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:47.032555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:47.042238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:47.042269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:47.051708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:47.051738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:47.061131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:47.061161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:47.070399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:47.070429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:47.080293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 
00:24:47.080327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:47.089965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:47.089994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:47.100254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:47.100295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:47.112375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:47.112405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:47.121014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:47.121044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:47.132007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:47.132036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:47.142585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.949 [2024-07-13 00:24:47.142625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.949 2024/07/13 00:24:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.949 [2024-07-13 00:24:47.158787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:59.949 [2024-07-13 00:24:47.158816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:59.950 2024/07/13 00:24:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:59.950 [2024-07-13 00:24:47.169484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:59.950 [2024-07-13 00:24:47.169514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:59.950 2024/07/13 00:24:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line sequence (subsystem.c:1793 "Requested NSID 1 already in use", nvmf_rpc.c:1513 "Unable to add namespace", JSON-RPC error Code=-32602 Msg=Invalid parameters) repeats for every further nvmf_subsystem_add_ns call from 00:24:47.185716 through 00:24:48.638341 (console timestamps 00:16:00.209 through 00:16:01.512) ...]
00:16:01.512 [2024-07-13 00:24:48.652066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:01.512 [2024-07-13 00:24:48.652096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:01.512 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:01.512 [2024-07-13 00:24:48.660130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:01.512 [2024-07-13 00:24:48.660159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.512 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.512 [2024-07-13 00:24:48.672072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.512 [2024-07-13 00:24:48.672101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.512 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.512 [2024-07-13 00:24:48.681440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.512 [2024-07-13 00:24:48.681469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.512 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.512 [2024-07-13 00:24:48.692376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.512 [2024-07-13 00:24:48.692407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.512 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.512 [2024-07-13 00:24:48.701521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.512 [2024-07-13 00:24:48.701550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.512 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.512 [2024-07-13 00:24:48.710975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.512 [2024-07-13 00:24:48.711006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.512 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.512 [2024-07-13 00:24:48.720023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.512 [2024-07-13 00:24:48.720052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.512 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.512 [2024-07-13 00:24:48.729007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:01.512 [2024-07-13 00:24:48.729038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.512 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.512 [2024-07-13 00:24:48.738175] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.512 [2024-07-13 00:24:48.738203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.512 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.747977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.748005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.757410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.757440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.767044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.767074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.775812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.775841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.784892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.784921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.793813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.793842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.802855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.802885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.811892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.811921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.821365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.821395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.830661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.830690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.839885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.839915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.848704] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.848736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.858262] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.858293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.867384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.867414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.876496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.876526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.886173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.886202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.895329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.895358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.904409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.904438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.913716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.913745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 
00:24:48.922640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.922669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.936480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.936535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.945113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.945145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.956078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.956108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.967877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.967909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.772 [2024-07-13 00:24:48.983913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.772 [2024-07-13 00:24:48.983946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.772 2024/07/13 00:24:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.032 [2024-07-13 00:24:49.001918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.032 [2024-07-13 00:24:49.001949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.032 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:16:02.032 [2024-07-13 00:24:49.011573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.032 [2024-07-13 00:24:49.011603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.032 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.032 [2024-07-13 00:24:49.025305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.032 [2024-07-13 00:24:49.025335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.032 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.032 [2024-07-13 00:24:49.034174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.032 [2024-07-13 00:24:49.034203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.032 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.032 [2024-07-13 00:24:49.048387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.032 [2024-07-13 00:24:49.048416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.032 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.032 [2024-07-13 00:24:49.057273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.032 [2024-07-13 00:24:49.057305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.032 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.032 [2024-07-13 00:24:49.066936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.032 [2024-07-13 00:24:49.066967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.032 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.032 [2024-07-13 00:24:49.076357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.032 [2024-07-13 00:24:49.076387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.032 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:02.032 [2024-07-13 00:24:49.085717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.032 [2024-07-13 00:24:49.085747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.032 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.032 [2024-07-13 00:24:49.096166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.032 [2024-07-13 00:24:49.096196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.032 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.032 [2024-07-13 00:24:49.113669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.032 [2024-07-13 00:24:49.113697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.032 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.032 [2024-07-13 00:24:49.124977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.032 [2024-07-13 00:24:49.125008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.032 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.032 [2024-07-13 00:24:49.133238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.032 [2024-07-13 00:24:49.133268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.032 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.032 [2024-07-13 00:24:49.143177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.032 [2024-07-13 00:24:49.143207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.032 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.032 [2024-07-13 00:24:49.152624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.032 [2024-07-13 00:24:49.152664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.032 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:02.032 [2024-07-13 00:24:49.161710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.032 [2024-07-13 00:24:49.161740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.033 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.033 [2024-07-13 00:24:49.170935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.033 [2024-07-13 00:24:49.170976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.033 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.033 [2024-07-13 00:24:49.180425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.033 [2024-07-13 00:24:49.180478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.033 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.033 [2024-07-13 00:24:49.190105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.033 [2024-07-13 00:24:49.190136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.033 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.033 [2024-07-13 00:24:49.199360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.033 [2024-07-13 00:24:49.199391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.033 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.033 [2024-07-13 00:24:49.208795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.033 [2024-07-13 00:24:49.208825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.033 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.033 [2024-07-13 00:24:49.218635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.033 [2024-07-13 00:24:49.218664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.033 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.033 [2024-07-13 00:24:49.228297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.033 [2024-07-13 00:24:49.228327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.033 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.033 [2024-07-13 00:24:49.238120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.033 [2024-07-13 00:24:49.238161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.033 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.033 [2024-07-13 00:24:49.247757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.033 [2024-07-13 00:24:49.247787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.033 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.292 [2024-07-13 00:24:49.262021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.292 [2024-07-13 00:24:49.262055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.292 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.292 [2024-07-13 00:24:49.270677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.292 [2024-07-13 00:24:49.270707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.292 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.292 [2024-07-13 00:24:49.283579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.292 [2024-07-13 00:24:49.283610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.292 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.292 [2024-07-13 00:24:49.299976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.292 [2024-07-13 00:24:49.300019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.292 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.292 [2024-07-13 00:24:49.317219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.292 [2024-07-13 00:24:49.317249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.292 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.292 [2024-07-13 00:24:49.333309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.292 [2024-07-13 00:24:49.333352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.292 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.292 [2024-07-13 00:24:49.344229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.292 [2024-07-13 00:24:49.344274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.292 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.292 [2024-07-13 00:24:49.352612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.292 [2024-07-13 00:24:49.352654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.292 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.292 [2024-07-13 00:24:49.364827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.292 [2024-07-13 00:24:49.364859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.292 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.292 [2024-07-13 00:24:49.381032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.292 [2024-07-13 00:24:49.381073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.293 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.293 [2024-07-13 00:24:49.398077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.293 [2024-07-13 00:24:49.398107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.293 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.293 [2024-07-13 00:24:49.408810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.293 [2024-07-13 00:24:49.408841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.293 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.293 [2024-07-13 00:24:49.417516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.293 [2024-07-13 00:24:49.417546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.293 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.293 [2024-07-13 00:24:49.428499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.293 [2024-07-13 00:24:49.428530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.293 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.293 [2024-07-13 00:24:49.439723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.293 [2024-07-13 00:24:49.439752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.293 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.293 [2024-07-13 00:24:49.447497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.293 [2024-07-13 00:24:49.447527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.293 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.293 [2024-07-13 00:24:49.459482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.293 [2024-07-13 00:24:49.459512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.293 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.293 [2024-07-13 00:24:49.470404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.293 [2024-07-13 00:24:49.470437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.293 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.293 [2024-07-13 00:24:49.478538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.293 [2024-07-13 00:24:49.478569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.293 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.293 [2024-07-13 00:24:49.490358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.293 [2024-07-13 00:24:49.490388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.293 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.293 [2024-07-13 00:24:49.500934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.293 [2024-07-13 00:24:49.500964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.293 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.293 [2024-07-13 00:24:49.509130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.293 [2024-07-13 00:24:49.509160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.293 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.293 [2024-07-13 00:24:49.521267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.293 [2024-07-13 00:24:49.521297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.552 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.552 [2024-07-13 00:24:49.531171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.552 [2024-07-13 00:24:49.531201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.552 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.552 [2024-07-13 00:24:49.540518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.552 [2024-07-13 00:24:49.540551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.552 2024/07/13 00:24:49 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.552 [2024-07-13 00:24:49.550166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.552 [2024-07-13 00:24:49.550197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.552 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.552 [2024-07-13 00:24:49.559637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.552 [2024-07-13 00:24:49.559667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.552 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.552 [2024-07-13 00:24:49.569348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.552 [2024-07-13 00:24:49.569378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.552 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.552 [2024-07-13 00:24:49.578813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.552 [2024-07-13 00:24:49.578843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.552 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.552 [2024-07-13 00:24:49.588109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.552 [2024-07-13 00:24:49.588139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.553 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.553 [2024-07-13 00:24:49.597533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.553 [2024-07-13 00:24:49.597563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.553 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.553 [2024-07-13 00:24:49.606882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.553 [2024-07-13 00:24:49.606912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.553 2024/07/13 00:24:49 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.553 [2024-07-13 00:24:49.616075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.553 [2024-07-13 00:24:49.616105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.553 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.553 [2024-07-13 00:24:49.625270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.553 [2024-07-13 00:24:49.625302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.553 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.553 [2024-07-13 00:24:49.635847] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.553 [2024-07-13 00:24:49.635876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.553 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.553 [2024-07-13 00:24:49.653538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.553 [2024-07-13 00:24:49.653580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.553 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.553 [2024-07-13 00:24:49.669186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.553 [2024-07-13 00:24:49.669216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.553 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.553 [2024-07-13 00:24:49.680265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.553 [2024-07-13 00:24:49.680297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.553 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.553 [2024-07-13 00:24:49.696428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.553 [2024-07-13 00:24:49.696502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.553 2024/07/13 
00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.553 [2024-07-13 00:24:49.707677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.553 [2024-07-13 00:24:49.707707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.553 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.553 [2024-07-13 00:24:49.716269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.553 [2024-07-13 00:24:49.716299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.553 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.553 [2024-07-13 00:24:49.725859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.553 [2024-07-13 00:24:49.725889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.553 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.553 [2024-07-13 00:24:49.735242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.553 [2024-07-13 00:24:49.735272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.553 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.553 [2024-07-13 00:24:49.744922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.553 [2024-07-13 00:24:49.744953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.553 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.553 [2024-07-13 00:24:49.754086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.553 [2024-07-13 00:24:49.754117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.553 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.553 [2024-07-13 00:24:49.763719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.553 [2024-07-13 00:24:49.763749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:02.553 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.553 [2024-07-13 00:24:49.773126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.553 [2024-07-13 00:24:49.773156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.553 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.812 [2024-07-13 00:24:49.783590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.812 [2024-07-13 00:24:49.783634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.812 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.812 [2024-07-13 00:24:49.796722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.812 [2024-07-13 00:24:49.796754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.812 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.812 [2024-07-13 00:24:49.807708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.812 [2024-07-13 00:24:49.807738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.812 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.812 [2024-07-13 00:24:49.824181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.812 [2024-07-13 00:24:49.824211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.812 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.812 [2024-07-13 00:24:49.835182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.812 [2024-07-13 00:24:49.835213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.812 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.812 [2024-07-13 00:24:49.844078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.812 [2024-07-13 00:24:49.844109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:02.813 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.813 [2024-07-13 00:24:49.853494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.813 [2024-07-13 00:24:49.853524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.813 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.813 [2024-07-13 00:24:49.863403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.813 [2024-07-13 00:24:49.863436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.813 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.813 [2024-07-13 00:24:49.872942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.813 [2024-07-13 00:24:49.872972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.813 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.813 [2024-07-13 00:24:49.884456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.813 [2024-07-13 00:24:49.884524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.813 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.813 [2024-07-13 00:24:49.895318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.813 [2024-07-13 00:24:49.895348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.813 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.813 [2024-07-13 00:24:49.903493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.813 [2024-07-13 00:24:49.903523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.813 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.813 [2024-07-13 00:24:49.915107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.813 [2024-07-13 00:24:49.915153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:02.813 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.813 [2024-07-13 00:24:49.933451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.813 [2024-07-13 00:24:49.933495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.813 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.813 [2024-07-13 00:24:49.947979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.813 [2024-07-13 00:24:49.948009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.813 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.813 [2024-07-13 00:24:49.963460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.813 [2024-07-13 00:24:49.963491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.813 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.813 [2024-07-13 00:24:49.980576] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.813 [2024-07-13 00:24:49.980608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.813 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.813 [2024-07-13 00:24:49.995821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.813 [2024-07-13 00:24:49.995871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.813 2024/07/13 00:24:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.813 [2024-07-13 00:24:50.006107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.813 [2024-07-13 00:24:50.006150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.813 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.813 [2024-07-13 00:24:50.016277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.813 [2024-07-13 00:24:50.016309] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.813 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.813 [2024-07-13 00:24:50.033988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.813 [2024-07-13 00:24:50.034028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.813 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.072 [2024-07-13 00:24:50.043848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.072 [2024-07-13 00:24:50.043879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.072 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.072 [2024-07-13 00:24:50.058066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.072 [2024-07-13 00:24:50.058096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.072 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.072 [2024-07-13 00:24:50.066554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.072 [2024-07-13 00:24:50.066584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.072 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.072 [2024-07-13 00:24:50.078253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.072 [2024-07-13 00:24:50.078285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.072 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.072 [2024-07-13 00:24:50.089074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.072 [2024-07-13 00:24:50.089105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.072 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.072 [2024-07-13 00:24:50.105509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.072 [2024-07-13 
00:24:50.105542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.115557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.115589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 00:16:03.073 Latency(us) 00:16:03.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.073 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:03.073 Nvme1n1 : 5.01 13370.05 104.45 0.00 0.00 9561.30 4200.26 23473.80 00:16:03.073 =================================================================================================================== 00:16:03.073 Total : 13370.05 104.45 0.00 0.00 9561.30 4200.26 23473.80 00:16:03.073 [2024-07-13 00:24:50.125350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.125377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.133343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.133371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.141335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.141358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.149337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.149358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.157337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 
[2024-07-13 00:24:50.157360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.165339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.165362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.173341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.173364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.181343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.181366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.189344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.189367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.197346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.197370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.205348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.205372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.213350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:16:03.073 [2024-07-13 00:24:50.213373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.221352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.221375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.229356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.229379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.237357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.237380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.249367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.249394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.257373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.257397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.265372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.265395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.273376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.273399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.281377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.281400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.289378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.289401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.073 [2024-07-13 00:24:50.297389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.073 [2024-07-13 00:24:50.297413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.073 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.332 [2024-07-13 00:24:50.305420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.332 [2024-07-13 00:24:50.305460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.332 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.332 [2024-07-13 00:24:50.313389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.332 [2024-07-13 00:24:50.313411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.332 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.332 [2024-07-13 00:24:50.321391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.332 [2024-07-13 00:24:50.321414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.332 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.332 [2024-07-13 00:24:50.329394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:03.332 [2024-07-13 00:24:50.329416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.332 2024/07/13 00:24:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.332 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (85879) - No such process 00:16:03.332 00:24:50 -- target/zcopy.sh@49 -- # wait 85879 00:16:03.333 00:24:50 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:03.333 00:24:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.333 00:24:50 -- common/autotest_common.sh@10 -- # set +x 00:16:03.333 00:24:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.333 00:24:50 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:03.333 00:24:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.333 00:24:50 -- common/autotest_common.sh@10 -- # set +x 00:16:03.333 delay0 00:16:03.333 00:24:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.333 00:24:50 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:03.333 00:24:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.333 00:24:50 -- common/autotest_common.sh@10 -- # set +x 00:16:03.333 00:24:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.333 00:24:50 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:03.333 [2024-07-13 00:24:50.535518] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:09.896 Initializing NVMe Controllers 00:16:09.896 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:09.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:09.896 Initialization complete. Launching workers. 
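The tail of the zcopy test traced above swaps the malloc0 namespace for a delay bdev and then runs the abort example against it, so that queued I/O is slow enough to be aborted. A condensed sketch of that sequence, assuming the rpc_cmd helper in the trace wraps scripts/rpc.py against the default RPC socket:

  # Replace NSID 1 with a delay bdev layered on malloc0 (latencies in microseconds).
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # Submit randrw I/O over TCP for 5 seconds and issue aborts against it,
  # producing the NS/CTRLR abort counters reported below.
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'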
00:16:09.896 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 81 00:16:09.896 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 368, failed to submit 33 00:16:09.896 success 174, unsuccess 194, failed 0 00:16:09.896 00:24:56 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:09.896 00:24:56 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:09.896 00:24:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:09.896 00:24:56 -- nvmf/common.sh@116 -- # sync 00:16:09.896 00:24:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:09.896 00:24:56 -- nvmf/common.sh@119 -- # set +e 00:16:09.896 00:24:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:09.896 00:24:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:09.896 rmmod nvme_tcp 00:16:09.896 rmmod nvme_fabrics 00:16:09.896 rmmod nvme_keyring 00:16:09.896 00:24:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:09.896 00:24:56 -- nvmf/common.sh@123 -- # set -e 00:16:09.896 00:24:56 -- nvmf/common.sh@124 -- # return 0 00:16:09.896 00:24:56 -- nvmf/common.sh@477 -- # '[' -n 85713 ']' 00:16:09.896 00:24:56 -- nvmf/common.sh@478 -- # killprocess 85713 00:16:09.896 00:24:56 -- common/autotest_common.sh@926 -- # '[' -z 85713 ']' 00:16:09.896 00:24:56 -- common/autotest_common.sh@930 -- # kill -0 85713 00:16:09.896 00:24:56 -- common/autotest_common.sh@931 -- # uname 00:16:09.896 00:24:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:09.896 00:24:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85713 00:16:09.896 00:24:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:09.896 killing process with pid 85713 00:16:09.896 00:24:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:09.896 00:24:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85713' 00:16:09.896 00:24:56 -- common/autotest_common.sh@945 -- # kill 85713 00:16:09.896 00:24:56 -- common/autotest_common.sh@950 -- # wait 85713 00:16:09.896 00:24:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:09.896 00:24:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:09.896 00:24:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:09.896 00:24:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:09.896 00:24:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:09.896 00:24:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.896 00:24:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.896 00:24:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.896 00:24:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:09.896 00:16:09.896 real 0m24.690s 00:16:09.896 user 0m39.350s 00:16:09.896 sys 0m6.933s 00:16:09.896 00:24:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.896 ************************************ 00:16:09.896 END TEST nvmf_zcopy 00:16:09.896 ************************************ 00:16:09.896 00:24:57 -- common/autotest_common.sh@10 -- # set +x 00:16:09.896 00:24:57 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:09.896 00:24:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:09.896 00:24:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:09.896 00:24:57 -- common/autotest_common.sh@10 -- # set +x 00:16:10.156 ************************************ 00:16:10.156 START TEST nvmf_nmic 
00:16:10.156 ************************************ 00:16:10.156 00:24:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:10.156 * Looking for test storage... 00:16:10.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:10.156 00:24:57 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:10.156 00:24:57 -- nvmf/common.sh@7 -- # uname -s 00:16:10.156 00:24:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.156 00:24:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.156 00:24:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.156 00:24:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.156 00:24:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.156 00:24:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.156 00:24:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.156 00:24:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.156 00:24:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.156 00:24:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.156 00:24:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:16:10.156 00:24:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:16:10.156 00:24:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.156 00:24:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.156 00:24:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:10.156 00:24:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:10.156 00:24:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.156 00:24:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.156 00:24:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.156 00:24:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.156 00:24:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.156 00:24:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.156 00:24:57 -- paths/export.sh@5 -- # export PATH 00:16:10.156 00:24:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.156 00:24:57 -- nvmf/common.sh@46 -- # : 0 00:16:10.156 00:24:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:10.156 00:24:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:10.156 00:24:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:10.156 00:24:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.156 00:24:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.156 00:24:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:10.156 00:24:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:10.156 00:24:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:10.156 00:24:57 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:10.156 00:24:57 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:10.156 00:24:57 -- target/nmic.sh@14 -- # nvmftestinit 00:16:10.156 00:24:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:10.156 00:24:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:10.156 00:24:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:10.156 00:24:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:10.156 00:24:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:10.156 00:24:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.156 00:24:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.156 00:24:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.156 00:24:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:10.156 00:24:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:10.156 00:24:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:10.156 00:24:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:10.156 00:24:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:10.156 00:24:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:10.156 00:24:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.156 00:24:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.156 00:24:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:10.156 00:24:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:10.157 00:24:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:10.157 00:24:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:10.157 00:24:57 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:10.157 00:24:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.157 00:24:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:10.157 00:24:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:10.157 00:24:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:10.157 00:24:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:10.157 00:24:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:10.157 00:24:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:10.157 Cannot find device "nvmf_tgt_br" 00:16:10.157 00:24:57 -- nvmf/common.sh@154 -- # true 00:16:10.157 00:24:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:10.157 Cannot find device "nvmf_tgt_br2" 00:16:10.157 00:24:57 -- nvmf/common.sh@155 -- # true 00:16:10.157 00:24:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:10.157 00:24:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:10.157 Cannot find device "nvmf_tgt_br" 00:16:10.157 00:24:57 -- nvmf/common.sh@157 -- # true 00:16:10.157 00:24:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:10.157 Cannot find device "nvmf_tgt_br2" 00:16:10.157 00:24:57 -- nvmf/common.sh@158 -- # true 00:16:10.157 00:24:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:10.157 00:24:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:10.157 00:24:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:10.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:10.157 00:24:57 -- nvmf/common.sh@161 -- # true 00:16:10.157 00:24:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:10.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:10.157 00:24:57 -- nvmf/common.sh@162 -- # true 00:16:10.157 00:24:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:10.416 00:24:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:10.416 00:24:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:10.416 00:24:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:10.416 00:24:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:10.416 00:24:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:10.416 00:24:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:10.416 00:24:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:10.416 00:24:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:10.416 00:24:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:10.416 00:24:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:10.416 00:24:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:10.416 00:24:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:10.416 00:24:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:10.416 00:24:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:10.416 00:24:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:10.416 00:24:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:10.416 00:24:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:10.416 00:24:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:10.416 00:24:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:10.416 00:24:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:10.416 00:24:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:10.416 00:24:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:10.416 00:24:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:10.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:10.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:16:10.416 00:16:10.416 --- 10.0.0.2 ping statistics --- 00:16:10.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.416 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:16:10.416 00:24:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:10.416 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:10.416 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:16:10.416 00:16:10.416 --- 10.0.0.3 ping statistics --- 00:16:10.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.416 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:10.416 00:24:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:10.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:10.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:10.416 00:16:10.416 --- 10.0.0.1 ping statistics --- 00:16:10.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.416 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:10.416 00:24:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.416 00:24:57 -- nvmf/common.sh@421 -- # return 0 00:16:10.416 00:24:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:10.416 00:24:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.416 00:24:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:10.416 00:24:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:10.416 00:24:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.416 00:24:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:10.416 00:24:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:10.416 00:24:57 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:10.416 00:24:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:10.416 00:24:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:10.416 00:24:57 -- common/autotest_common.sh@10 -- # set +x 00:16:10.416 00:24:57 -- nvmf/common.sh@469 -- # nvmfpid=86205 00:16:10.416 00:24:57 -- nvmf/common.sh@470 -- # waitforlisten 86205 00:16:10.416 00:24:57 -- common/autotest_common.sh@819 -- # '[' -z 86205 ']' 00:16:10.416 00:24:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.416 00:24:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:10.416 00:24:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:10.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
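The nvmf_veth_init trace above builds the throwaway test network the rest of the run depends on: the initiator side stays in the root namespace, the SPDK target runs inside the nvmf_tgt_ns_spdk namespace, and a bridge joins the veth peers so 10.0.0.1 can reach 10.0.0.2/10.0.0.3. A condensed sketch of the same commands, reordered for readability and with the individual "ip link set ... up" steps omitted:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target, listener IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # target, second IP
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # connectivity checks as run above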
00:16:10.416 00:24:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.416 00:24:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:10.416 00:24:57 -- common/autotest_common.sh@10 -- # set +x 00:16:10.675 [2024-07-13 00:24:57.658503] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:10.675 [2024-07-13 00:24:57.658604] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.675 [2024-07-13 00:24:57.795751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:10.675 [2024-07-13 00:24:57.879015] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:10.675 [2024-07-13 00:24:57.879188] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.675 [2024-07-13 00:24:57.879202] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.675 [2024-07-13 00:24:57.879210] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:10.675 [2024-07-13 00:24:57.879375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.675 [2024-07-13 00:24:57.879546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.675 [2024-07-13 00:24:57.880039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:10.675 [2024-07-13 00:24:57.880087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.612 00:24:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:11.612 00:24:58 -- common/autotest_common.sh@852 -- # return 0 00:16:11.612 00:24:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:11.612 00:24:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:11.612 00:24:58 -- common/autotest_common.sh@10 -- # set +x 00:16:11.612 00:24:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.612 00:24:58 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:11.612 00:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.612 00:24:58 -- common/autotest_common.sh@10 -- # set +x 00:16:11.612 [2024-07-13 00:24:58.638329] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.612 00:24:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.612 00:24:58 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:11.612 00:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.612 00:24:58 -- common/autotest_common.sh@10 -- # set +x 00:16:11.612 Malloc0 00:16:11.612 00:24:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.612 00:24:58 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:11.612 00:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.612 00:24:58 -- common/autotest_common.sh@10 -- # set +x 00:16:11.612 00:24:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.612 00:24:58 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:11.612 00:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.612 00:24:58 
-- common/autotest_common.sh@10 -- # set +x 00:16:11.612 00:24:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.612 00:24:58 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:11.612 00:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.612 00:24:58 -- common/autotest_common.sh@10 -- # set +x 00:16:11.612 [2024-07-13 00:24:58.713969] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.612 00:24:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.612 test case1: single bdev can't be used in multiple subsystems 00:16:11.612 00:24:58 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:11.612 00:24:58 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:11.612 00:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.612 00:24:58 -- common/autotest_common.sh@10 -- # set +x 00:16:11.612 00:24:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.612 00:24:58 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:11.612 00:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.612 00:24:58 -- common/autotest_common.sh@10 -- # set +x 00:16:11.612 00:24:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.612 00:24:58 -- target/nmic.sh@28 -- # nmic_status=0 00:16:11.612 00:24:58 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:11.612 00:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.612 00:24:58 -- common/autotest_common.sh@10 -- # set +x 00:16:11.612 [2024-07-13 00:24:58.737846] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:11.612 [2024-07-13 00:24:58.737882] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:11.612 [2024-07-13 00:24:58.737894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.612 2024/07/13 00:24:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.612 request: 00:16:11.612 { 00:16:11.612 "method": "nvmf_subsystem_add_ns", 00:16:11.612 "params": { 00:16:11.612 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:11.612 "namespace": { 00:16:11.612 "bdev_name": "Malloc0" 00:16:11.612 } 00:16:11.612 } 00:16:11.612 } 00:16:11.612 Got JSON-RPC error response 00:16:11.612 GoRPCClient: error on JSON-RPC call 00:16:11.612 00:24:58 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:11.612 00:24:58 -- target/nmic.sh@29 -- # nmic_status=1 00:16:11.612 00:24:58 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:11.612 Adding namespace failed - expected result. 00:16:11.612 00:24:58 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
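Test case1 above verifies bdev claim exclusivity: Malloc0 is already claimed (exclusive_write) by nqn.2016-06.io.spdk:cnode1, so attaching it to a second subsystem must fail, and the script treats the resulting Code=-32602 error as the expected result. A minimal sketch of that check, again assuming rpc_cmd wraps scripts/rpc.py on the default socket:

  # Malloc0 is attached to cnode1 first, as in the trace above.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

  # Re-using the same bdev in a second subsystem should be rejected.
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  if ! ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo 'Adding namespace failed - expected result.'
  fi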
00:16:11.612 test case2: host connect to nvmf target in multiple paths 00:16:11.612 00:24:58 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:11.612 00:24:58 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:11.612 00:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.612 00:24:58 -- common/autotest_common.sh@10 -- # set +x 00:16:11.612 [2024-07-13 00:24:58.750022] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:11.612 00:24:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.612 00:24:58 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:11.871 00:24:58 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:11.871 00:24:59 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:11.871 00:24:59 -- common/autotest_common.sh@1177 -- # local i=0 00:16:11.871 00:24:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:11.871 00:24:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:16:11.871 00:24:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:14.414 00:25:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:14.414 00:25:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:14.414 00:25:01 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:14.414 00:25:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:16:14.414 00:25:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:14.414 00:25:01 -- common/autotest_common.sh@1187 -- # return 0 00:16:14.414 00:25:01 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:14.414 [global] 00:16:14.414 thread=1 00:16:14.414 invalidate=1 00:16:14.414 rw=write 00:16:14.414 time_based=1 00:16:14.414 runtime=1 00:16:14.414 ioengine=libaio 00:16:14.414 direct=1 00:16:14.414 bs=4096 00:16:14.414 iodepth=1 00:16:14.414 norandommap=0 00:16:14.414 numjobs=1 00:16:14.414 00:16:14.414 verify_dump=1 00:16:14.414 verify_backlog=512 00:16:14.414 verify_state_save=0 00:16:14.414 do_verify=1 00:16:14.414 verify=crc32c-intel 00:16:14.414 [job0] 00:16:14.414 filename=/dev/nvme0n1 00:16:14.414 Could not set queue depth (nvme0n1) 00:16:14.414 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:14.414 fio-3.35 00:16:14.414 Starting 1 thread 00:16:15.374 00:16:15.374 job0: (groupid=0, jobs=1): err= 0: pid=86315: Sat Jul 13 00:25:02 2024 00:16:15.374 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:15.374 slat (nsec): min=12995, max=76966, avg=15754.56, stdev=4750.56 00:16:15.374 clat (usec): min=123, max=340, avg=159.46, stdev=19.31 00:16:15.374 lat (usec): min=138, max=354, avg=175.22, stdev=20.03 00:16:15.374 clat percentiles (usec): 00:16:15.374 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:16:15.374 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:16:15.374 | 70.00th=[ 165], 80.00th=[ 176], 90.00th=[ 188], 
95.00th=[ 196], 00:16:15.374 | 99.00th=[ 221], 99.50th=[ 231], 99.90th=[ 245], 99.95th=[ 251], 00:16:15.374 | 99.99th=[ 343] 00:16:15.374 write: IOPS=3250, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec); 0 zone resets 00:16:15.374 slat (usec): min=18, max=119, avg=24.33, stdev= 8.51 00:16:15.374 clat (usec): min=71, max=666, avg=114.14, stdev=22.30 00:16:15.374 lat (usec): min=108, max=694, avg=138.47, stdev=24.05 00:16:15.374 clat percentiles (usec): 00:16:15.374 | 1.00th=[ 93], 5.00th=[ 97], 10.00th=[ 99], 20.00th=[ 101], 00:16:15.374 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 110], 60.00th=[ 113], 00:16:15.374 | 70.00th=[ 118], 80.00th=[ 125], 90.00th=[ 137], 95.00th=[ 147], 00:16:15.374 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 404], 99.95th=[ 502], 00:16:15.374 | 99.99th=[ 668] 00:16:15.374 bw ( KiB/s): min=13160, max=13160, per=100.00%, avg=13160.00, stdev= 0.00, samples=1 00:16:15.374 iops : min= 3290, max= 3290, avg=3290.00, stdev= 0.00, samples=1 00:16:15.374 lat (usec) : 100=7.82%, 250=92.03%, 500=0.11%, 750=0.03% 00:16:15.374 cpu : usr=1.80%, sys=9.50%, ctx=6326, majf=0, minf=2 00:16:15.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:15.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.375 issued rwts: total=3072,3254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.375 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:15.375 00:16:15.375 Run status group 0 (all jobs): 00:16:15.375 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:16:15.375 WRITE: bw=12.7MiB/s (13.3MB/s), 12.7MiB/s-12.7MiB/s (13.3MB/s-13.3MB/s), io=12.7MiB (13.3MB), run=1001-1001msec 00:16:15.375 00:16:15.375 Disk stats (read/write): 00:16:15.375 nvme0n1: ios=2693/3072, merge=0/0, ticks=469/393, in_queue=862, util=91.18% 00:16:15.375 00:25:02 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:15.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:15.375 00:25:02 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:15.375 00:25:02 -- common/autotest_common.sh@1198 -- # local i=0 00:16:15.375 00:25:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:15.375 00:25:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.375 00:25:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:15.375 00:25:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.375 00:25:02 -- common/autotest_common.sh@1210 -- # return 0 00:16:15.375 00:25:02 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:15.375 00:25:02 -- target/nmic.sh@53 -- # nvmftestfini 00:16:15.375 00:25:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:15.375 00:25:02 -- nvmf/common.sh@116 -- # sync 00:16:15.375 00:25:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:15.375 00:25:02 -- nvmf/common.sh@119 -- # set +e 00:16:15.375 00:25:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:15.375 00:25:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:15.375 rmmod nvme_tcp 00:16:15.375 rmmod nvme_fabrics 00:16:15.375 rmmod nvme_keyring 00:16:15.642 00:25:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:15.642 00:25:02 -- nvmf/common.sh@123 -- # set -e 00:16:15.642 00:25:02 -- nvmf/common.sh@124 -- # return 0 00:16:15.642 00:25:02 -- nvmf/common.sh@477 -- 
# '[' -n 86205 ']' 00:16:15.642 00:25:02 -- nvmf/common.sh@478 -- # killprocess 86205 00:16:15.642 00:25:02 -- common/autotest_common.sh@926 -- # '[' -z 86205 ']' 00:16:15.642 00:25:02 -- common/autotest_common.sh@930 -- # kill -0 86205 00:16:15.642 00:25:02 -- common/autotest_common.sh@931 -- # uname 00:16:15.642 00:25:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:15.642 00:25:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86205 00:16:15.642 00:25:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:15.642 killing process with pid 86205 00:16:15.642 00:25:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:15.642 00:25:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86205' 00:16:15.642 00:25:02 -- common/autotest_common.sh@945 -- # kill 86205 00:16:15.642 00:25:02 -- common/autotest_common.sh@950 -- # wait 86205 00:16:15.899 00:25:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:15.899 00:25:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:15.899 00:25:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:15.899 00:25:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.899 00:25:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:15.899 00:25:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.899 00:25:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.899 00:25:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.899 00:25:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:15.899 00:16:15.899 real 0m5.780s 00:16:15.899 user 0m19.695s 00:16:15.899 sys 0m1.200s 00:16:15.899 00:25:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.899 00:25:02 -- common/autotest_common.sh@10 -- # set +x 00:16:15.899 ************************************ 00:16:15.899 END TEST nvmf_nmic 00:16:15.899 ************************************ 00:16:15.899 00:25:02 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:15.899 00:25:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:15.899 00:25:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:15.899 00:25:02 -- common/autotest_common.sh@10 -- # set +x 00:16:15.899 ************************************ 00:16:15.899 START TEST nvmf_fio_target 00:16:15.899 ************************************ 00:16:15.899 00:25:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:15.899 * Looking for test storage... 
00:16:15.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:15.899 00:25:03 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:15.899 00:25:03 -- nvmf/common.sh@7 -- # uname -s 00:16:15.899 00:25:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.899 00:25:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.899 00:25:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.899 00:25:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.899 00:25:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.899 00:25:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.899 00:25:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.899 00:25:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.899 00:25:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.899 00:25:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.899 00:25:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:16:15.899 00:25:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:16:15.899 00:25:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.899 00:25:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.899 00:25:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:15.899 00:25:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:15.899 00:25:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.899 00:25:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.899 00:25:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.899 00:25:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.900 00:25:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.900 00:25:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.900 00:25:03 -- paths/export.sh@5 
-- # export PATH 00:16:15.900 00:25:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.900 00:25:03 -- nvmf/common.sh@46 -- # : 0 00:16:15.900 00:25:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:15.900 00:25:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:15.900 00:25:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:15.900 00:25:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.900 00:25:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.900 00:25:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:15.900 00:25:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:15.900 00:25:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:15.900 00:25:03 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:15.900 00:25:03 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:15.900 00:25:03 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:15.900 00:25:03 -- target/fio.sh@16 -- # nvmftestinit 00:16:15.900 00:25:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:15.900 00:25:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.900 00:25:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:15.900 00:25:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:15.900 00:25:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:15.900 00:25:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.900 00:25:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.900 00:25:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.900 00:25:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:15.900 00:25:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:15.900 00:25:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:15.900 00:25:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:15.900 00:25:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:15.900 00:25:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:15.900 00:25:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.900 00:25:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.900 00:25:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:15.900 00:25:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:15.900 00:25:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:15.900 00:25:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:15.900 00:25:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:15.900 00:25:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.900 00:25:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:15.900 00:25:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:15.900 00:25:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:15.900 00:25:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:15.900 00:25:03 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:15.900 00:25:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:15.900 Cannot find device "nvmf_tgt_br" 00:16:15.900 00:25:03 -- nvmf/common.sh@154 -- # true 00:16:15.900 00:25:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:15.900 Cannot find device "nvmf_tgt_br2" 00:16:15.900 00:25:03 -- nvmf/common.sh@155 -- # true 00:16:15.900 00:25:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:15.900 00:25:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:15.900 Cannot find device "nvmf_tgt_br" 00:16:15.900 00:25:03 -- nvmf/common.sh@157 -- # true 00:16:15.900 00:25:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:15.900 Cannot find device "nvmf_tgt_br2" 00:16:15.900 00:25:03 -- nvmf/common.sh@158 -- # true 00:16:15.900 00:25:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:16.158 00:25:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:16.158 00:25:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:16.158 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.158 00:25:03 -- nvmf/common.sh@161 -- # true 00:16:16.158 00:25:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:16.158 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.158 00:25:03 -- nvmf/common.sh@162 -- # true 00:16:16.158 00:25:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:16.158 00:25:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:16.158 00:25:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:16.158 00:25:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:16.158 00:25:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:16.158 00:25:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:16.158 00:25:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:16.158 00:25:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:16.158 00:25:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:16.158 00:25:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:16.158 00:25:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:16.158 00:25:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:16.158 00:25:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:16.158 00:25:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:16.158 00:25:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:16.158 00:25:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:16.158 00:25:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:16.158 00:25:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:16.158 00:25:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:16.158 00:25:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:16.158 00:25:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:16.158 00:25:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:16.158 00:25:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:16.158 00:25:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:16.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:16:16.158 00:16:16.158 --- 10.0.0.2 ping statistics --- 00:16:16.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.158 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:16.158 00:25:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:16.158 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:16.158 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:16:16.158 00:16:16.158 --- 10.0.0.3 ping statistics --- 00:16:16.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.158 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:16:16.158 00:25:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:16.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:16.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:16.416 00:16:16.417 --- 10.0.0.1 ping statistics --- 00:16:16.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.417 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:16.417 00:25:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.417 00:25:03 -- nvmf/common.sh@421 -- # return 0 00:16:16.417 00:25:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:16.417 00:25:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.417 00:25:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:16.417 00:25:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:16.417 00:25:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.417 00:25:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:16.417 00:25:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:16.417 00:25:03 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:16.417 00:25:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:16.417 00:25:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:16.417 00:25:03 -- common/autotest_common.sh@10 -- # set +x 00:16:16.417 00:25:03 -- nvmf/common.sh@469 -- # nvmfpid=86491 00:16:16.417 00:25:03 -- nvmf/common.sh@470 -- # waitforlisten 86491 00:16:16.417 00:25:03 -- common/autotest_common.sh@819 -- # '[' -z 86491 ']' 00:16:16.417 00:25:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.417 00:25:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:16.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.417 00:25:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:16.417 00:25:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.417 00:25:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:16.417 00:25:03 -- common/autotest_common.sh@10 -- # set +x 00:16:16.417 [2024-07-13 00:25:03.474489] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:16.417 [2024-07-13 00:25:03.474572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.417 [2024-07-13 00:25:03.614217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.675 [2024-07-13 00:25:03.698168] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:16.675 [2024-07-13 00:25:03.698312] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.675 [2024-07-13 00:25:03.698325] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.675 [2024-07-13 00:25:03.698334] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:16.675 [2024-07-13 00:25:03.699318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.675 [2024-07-13 00:25:03.699488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.675 [2024-07-13 00:25:03.699575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:16.675 [2024-07-13 00:25:03.699704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.610 00:25:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:17.610 00:25:04 -- common/autotest_common.sh@852 -- # return 0 00:16:17.610 00:25:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:17.610 00:25:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:17.610 00:25:04 -- common/autotest_common.sh@10 -- # set +x 00:16:17.610 00:25:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.610 00:25:04 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:17.610 [2024-07-13 00:25:04.711035] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.610 00:25:04 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:17.868 00:25:04 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:17.868 00:25:04 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:18.127 00:25:05 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:18.127 00:25:05 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:18.385 00:25:05 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:18.385 00:25:05 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:18.643 00:25:05 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:18.643 00:25:05 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:18.901 00:25:06 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:19.466 00:25:06 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:19.466 00:25:06 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:19.466 00:25:06 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:19.466 00:25:06 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:19.725 00:25:06 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:16:19.725 00:25:06 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:20.290 00:25:07 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:20.290 00:25:07 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:20.290 00:25:07 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:20.547 00:25:07 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:20.547 00:25:07 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:20.804 00:25:07 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.804 [2024-07-13 00:25:08.033119] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.061 00:25:08 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:21.061 00:25:08 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:21.319 00:25:08 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:21.577 00:25:08 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:21.577 00:25:08 -- common/autotest_common.sh@1177 -- # local i=0 00:16:21.577 00:25:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:21.577 00:25:08 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:16:21.577 00:25:08 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:16:21.577 00:25:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:23.479 00:25:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:23.479 00:25:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:23.479 00:25:10 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:23.479 00:25:10 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:16:23.479 00:25:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:23.479 00:25:10 -- common/autotest_common.sh@1187 -- # return 0 00:16:23.479 00:25:10 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:23.479 [global] 00:16:23.479 thread=1 00:16:23.479 invalidate=1 00:16:23.479 rw=write 00:16:23.479 time_based=1 00:16:23.479 runtime=1 00:16:23.479 ioengine=libaio 00:16:23.479 direct=1 00:16:23.479 bs=4096 00:16:23.479 iodepth=1 00:16:23.479 norandommap=0 00:16:23.479 numjobs=1 00:16:23.479 00:16:23.479 verify_dump=1 00:16:23.479 verify_backlog=512 00:16:23.479 verify_state_save=0 00:16:23.479 do_verify=1 00:16:23.479 verify=crc32c-intel 00:16:23.479 [job0] 00:16:23.479 filename=/dev/nvme0n1 00:16:23.479 [job1] 00:16:23.479 filename=/dev/nvme0n2 00:16:23.479 [job2] 00:16:23.479 filename=/dev/nvme0n3 00:16:23.479 [job3] 00:16:23.479 filename=/dev/nvme0n4 00:16:23.738 Could not set queue depth (nvme0n1) 00:16:23.738 Could not set queue depth (nvme0n2) 
00:16:23.738 Could not set queue depth (nvme0n3) 00:16:23.738 Could not set queue depth (nvme0n4) 00:16:23.738 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:23.738 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:23.738 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:23.738 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:23.738 fio-3.35 00:16:23.738 Starting 4 threads 00:16:25.113 00:16:25.113 job0: (groupid=0, jobs=1): err= 0: pid=86779: Sat Jul 13 00:25:12 2024 00:16:25.113 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:25.113 slat (nsec): min=13089, max=64993, avg=16454.19, stdev=4197.53 00:16:25.113 clat (usec): min=133, max=1650, avg=221.47, stdev=41.76 00:16:25.113 lat (usec): min=147, max=1668, avg=237.93, stdev=41.94 00:16:25.113 clat percentiles (usec): 00:16:25.113 | 1.00th=[ 151], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 202], 00:16:25.113 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 225], 00:16:25.113 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 265], 00:16:25.113 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 338], 99.95th=[ 529], 00:16:25.113 | 99.99th=[ 1647] 00:16:25.113 write: IOPS=2345, BW=9383KiB/s (9608kB/s)(9392KiB/1001msec); 0 zone resets 00:16:25.113 slat (nsec): min=18501, max=98035, avg=25893.93, stdev=6504.03 00:16:25.113 clat (usec): min=95, max=296, avg=188.99, stdev=32.92 00:16:25.113 lat (usec): min=117, max=357, avg=214.89, stdev=34.11 00:16:25.113 clat percentiles (usec): 00:16:25.113 | 1.00th=[ 108], 5.00th=[ 128], 10.00th=[ 143], 20.00th=[ 163], 00:16:25.113 | 30.00th=[ 176], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 200], 00:16:25.113 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 227], 95.00th=[ 241], 00:16:25.113 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 285], 99.95th=[ 293], 00:16:25.113 | 99.99th=[ 297] 00:16:25.113 bw ( KiB/s): min= 9288, max= 9288, per=27.00%, avg=9288.00, stdev= 0.00, samples=1 00:16:25.113 iops : min= 2322, max= 2322, avg=2322.00, stdev= 0.00, samples=1 00:16:25.113 lat (usec) : 100=0.14%, 250=92.61%, 500=7.21%, 750=0.02% 00:16:25.113 lat (msec) : 2=0.02% 00:16:25.113 cpu : usr=2.00%, sys=6.50%, ctx=4396, majf=0, minf=10 00:16:25.113 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:25.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.113 issued rwts: total=2048,2348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.113 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:25.113 job1: (groupid=0, jobs=1): err= 0: pid=86780: Sat Jul 13 00:25:12 2024 00:16:25.113 read: IOPS=1990, BW=7960KiB/s (8151kB/s)(7968KiB/1001msec) 00:16:25.113 slat (nsec): min=12663, max=62743, avg=16303.69, stdev=3805.37 00:16:25.113 clat (usec): min=147, max=2908, avg=239.06, stdev=70.45 00:16:25.113 lat (usec): min=161, max=2923, avg=255.37, stdev=70.45 00:16:25.113 clat percentiles (usec): 00:16:25.113 | 1.00th=[ 172], 5.00th=[ 190], 10.00th=[ 198], 20.00th=[ 208], 00:16:25.113 | 30.00th=[ 217], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 241], 00:16:25.113 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 302], 00:16:25.113 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 906], 99.95th=[ 2900], 00:16:25.113 | 
99.99th=[ 2900] 00:16:25.113 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:25.113 slat (usec): min=19, max=114, avg=26.46, stdev= 6.33 00:16:25.113 clat (usec): min=135, max=382, avg=209.85, stdev=27.05 00:16:25.113 lat (usec): min=159, max=404, avg=236.31, stdev=27.23 00:16:25.113 clat percentiles (usec): 00:16:25.113 | 1.00th=[ 151], 5.00th=[ 172], 10.00th=[ 180], 20.00th=[ 188], 00:16:25.113 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 215], 00:16:25.113 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 245], 95.00th=[ 260], 00:16:25.113 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 330], 99.95th=[ 334], 00:16:25.113 | 99.99th=[ 383] 00:16:25.113 bw ( KiB/s): min= 8192, max= 8192, per=23.82%, avg=8192.00, stdev= 0.00, samples=1 00:16:25.113 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:25.113 lat (usec) : 250=80.99%, 500=18.96%, 1000=0.02% 00:16:25.113 lat (msec) : 4=0.02% 00:16:25.113 cpu : usr=2.10%, sys=6.00%, ctx=4040, majf=0, minf=7 00:16:25.113 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:25.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.113 issued rwts: total=1992,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.113 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:25.113 job2: (groupid=0, jobs=1): err= 0: pid=86781: Sat Jul 13 00:25:12 2024 00:16:25.113 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:25.113 slat (nsec): min=13664, max=64349, avg=17646.44, stdev=4504.65 00:16:25.113 clat (usec): min=151, max=342, avg=224.92, stdev=27.37 00:16:25.113 lat (usec): min=166, max=360, avg=242.56, stdev=27.56 00:16:25.113 clat percentiles (usec): 00:16:25.113 | 1.00th=[ 165], 5.00th=[ 184], 10.00th=[ 194], 20.00th=[ 206], 00:16:25.113 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:16:25.113 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 260], 95.00th=[ 277], 00:16:25.113 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 330], 99.95th=[ 334], 00:16:25.113 | 99.99th=[ 343] 00:16:25.113 write: IOPS=2161, BW=8647KiB/s (8855kB/s)(8656KiB/1001msec); 0 zone resets 00:16:25.113 slat (nsec): min=20168, max=94954, avg=27648.94, stdev=6402.03 00:16:25.113 clat (usec): min=119, max=433, avg=201.01, stdev=29.27 00:16:25.113 lat (usec): min=142, max=468, avg=228.66, stdev=29.75 00:16:25.113 clat percentiles (usec): 00:16:25.113 | 1.00th=[ 135], 5.00th=[ 153], 10.00th=[ 165], 20.00th=[ 180], 00:16:25.113 | 30.00th=[ 188], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:16:25.113 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 239], 95.00th=[ 251], 00:16:25.113 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 326], 99.95th=[ 400], 00:16:25.113 | 99.99th=[ 433] 00:16:25.113 bw ( KiB/s): min= 8728, max= 8728, per=25.37%, avg=8728.00, stdev= 0.00, samples=1 00:16:25.113 iops : min= 2182, max= 2182, avg=2182.00, stdev= 0.00, samples=1 00:16:25.113 lat (usec) : 250=89.77%, 500=10.23% 00:16:25.113 cpu : usr=1.80%, sys=6.90%, ctx=4212, majf=0, minf=11 00:16:25.113 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:25.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.113 issued rwts: total=2048,2164,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.113 latency : target=0, window=0, percentile=100.00%, depth=1 
00:16:25.113 job3: (groupid=0, jobs=1): err= 0: pid=86782: Sat Jul 13 00:25:12 2024 00:16:25.113 read: IOPS=1990, BW=7960KiB/s (8151kB/s)(7968KiB/1001msec) 00:16:25.113 slat (nsec): min=12610, max=61934, avg=16337.12, stdev=4166.59 00:16:25.113 clat (usec): min=164, max=412, avg=237.64, stdev=33.27 00:16:25.113 lat (usec): min=180, max=425, avg=253.97, stdev=33.67 00:16:25.113 clat percentiles (usec): 00:16:25.113 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 210], 00:16:25.113 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 239], 00:16:25.113 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 302], 00:16:25.113 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 408], 99.95th=[ 412], 00:16:25.113 | 99.99th=[ 412] 00:16:25.113 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:25.113 slat (nsec): min=18731, max=98424, avg=25424.42, stdev=6311.77 00:16:25.113 clat (usec): min=137, max=381, avg=211.85, stdev=26.48 00:16:25.113 lat (usec): min=158, max=403, avg=237.28, stdev=26.82 00:16:25.113 clat percentiles (usec): 00:16:25.113 | 1.00th=[ 159], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 192], 00:16:25.113 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 217], 00:16:25.113 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 245], 95.00th=[ 262], 00:16:25.113 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 355], 99.95th=[ 359], 00:16:25.113 | 99.99th=[ 383] 00:16:25.113 bw ( KiB/s): min= 8208, max= 8208, per=23.86%, avg=8208.00, stdev= 0.00, samples=1 00:16:25.113 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:16:25.113 lat (usec) : 250=81.63%, 500=18.37% 00:16:25.113 cpu : usr=2.30%, sys=5.60%, ctx=4040, majf=0, minf=13 00:16:25.113 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:25.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.113 issued rwts: total=1992,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.113 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:25.113 00:16:25.113 Run status group 0 (all jobs): 00:16:25.113 READ: bw=31.5MiB/s (33.1MB/s), 7960KiB/s-8184KiB/s (8151kB/s-8380kB/s), io=31.6MiB (33.1MB), run=1001-1001msec 00:16:25.113 WRITE: bw=33.6MiB/s (35.2MB/s), 8184KiB/s-9383KiB/s (8380kB/s-9608kB/s), io=33.6MiB (35.3MB), run=1001-1001msec 00:16:25.113 00:16:25.113 Disk stats (read/write): 00:16:25.113 nvme0n1: ios=1775/2048, merge=0/0, ticks=410/417, in_queue=827, util=87.78% 00:16:25.113 nvme0n2: ios=1580/1977, merge=0/0, ticks=398/423, in_queue=821, util=88.26% 00:16:25.113 nvme0n3: ios=1646/2048, merge=0/0, ticks=448/436, in_queue=884, util=92.98% 00:16:25.113 nvme0n4: ios=1589/1981, merge=0/0, ticks=468/425, in_queue=893, util=93.25% 00:16:25.113 00:25:12 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:25.113 [global] 00:16:25.113 thread=1 00:16:25.113 invalidate=1 00:16:25.113 rw=randwrite 00:16:25.113 time_based=1 00:16:25.113 runtime=1 00:16:25.113 ioengine=libaio 00:16:25.113 direct=1 00:16:25.113 bs=4096 00:16:25.113 iodepth=1 00:16:25.113 norandommap=0 00:16:25.113 numjobs=1 00:16:25.113 00:16:25.113 verify_dump=1 00:16:25.113 verify_backlog=512 00:16:25.113 verify_state_save=0 00:16:25.113 do_verify=1 00:16:25.113 verify=crc32c-intel 00:16:25.113 [job0] 00:16:25.113 filename=/dev/nvme0n1 00:16:25.113 [job1] 00:16:25.113 filename=/dev/nvme0n2 00:16:25.113 
[job2] 00:16:25.113 filename=/dev/nvme0n3 00:16:25.113 [job3] 00:16:25.113 filename=/dev/nvme0n4 00:16:25.113 Could not set queue depth (nvme0n1) 00:16:25.113 Could not set queue depth (nvme0n2) 00:16:25.113 Could not set queue depth (nvme0n3) 00:16:25.113 Could not set queue depth (nvme0n4) 00:16:25.113 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:25.113 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:25.113 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:25.113 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:25.113 fio-3.35 00:16:25.113 Starting 4 threads 00:16:26.488 00:16:26.488 job0: (groupid=0, jobs=1): err= 0: pid=86840: Sat Jul 13 00:25:13 2024 00:16:26.488 read: IOPS=1117, BW=4472KiB/s (4579kB/s)(4476KiB/1001msec) 00:16:26.488 slat (usec): min=14, max=105, avg=31.02, stdev=10.74 00:16:26.488 clat (usec): min=209, max=838, avg=393.29, stdev=62.59 00:16:26.488 lat (usec): min=239, max=857, avg=424.32, stdev=60.07 00:16:26.488 clat percentiles (usec): 00:16:26.488 | 1.00th=[ 293], 5.00th=[ 322], 10.00th=[ 334], 20.00th=[ 351], 00:16:26.488 | 30.00th=[ 359], 40.00th=[ 371], 50.00th=[ 379], 60.00th=[ 396], 00:16:26.488 | 70.00th=[ 416], 80.00th=[ 433], 90.00th=[ 461], 95.00th=[ 498], 00:16:26.488 | 99.00th=[ 644], 99.50th=[ 685], 99.90th=[ 799], 99.95th=[ 840], 00:16:26.488 | 99.99th=[ 840] 00:16:26.488 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:26.488 slat (usec): min=26, max=427, avg=40.72, stdev=16.18 00:16:26.488 clat (usec): min=35, max=747, avg=295.44, stdev=67.19 00:16:26.488 lat (usec): min=169, max=789, avg=336.16, stdev=65.60 00:16:26.488 clat percentiles (usec): 00:16:26.488 | 1.00th=[ 172], 5.00th=[ 204], 10.00th=[ 221], 20.00th=[ 241], 00:16:26.488 | 30.00th=[ 253], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 302], 00:16:26.488 | 70.00th=[ 322], 80.00th=[ 355], 90.00th=[ 392], 95.00th=[ 416], 00:16:26.488 | 99.00th=[ 465], 99.50th=[ 478], 99.90th=[ 510], 99.95th=[ 750], 00:16:26.488 | 99.99th=[ 750] 00:16:26.488 bw ( KiB/s): min= 6920, max= 6920, per=23.43%, avg=6920.00, stdev= 0.00, samples=1 00:16:26.488 iops : min= 1730, max= 1730, avg=1730.00, stdev= 0.00, samples=1 00:16:26.488 lat (usec) : 50=0.04%, 250=15.63%, 500=82.18%, 750=2.03%, 1000=0.11% 00:16:26.488 cpu : usr=1.40%, sys=7.70%, ctx=2656, majf=0, minf=12 00:16:26.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:26.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.488 issued rwts: total=1119,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:26.488 job1: (groupid=0, jobs=1): err= 0: pid=86842: Sat Jul 13 00:25:13 2024 00:16:26.488 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:26.488 slat (nsec): min=13129, max=66650, avg=15639.54, stdev=4213.15 00:16:26.488 clat (usec): min=144, max=728, avg=228.49, stdev=30.70 00:16:26.488 lat (usec): min=158, max=742, avg=244.13, stdev=31.05 00:16:26.488 clat percentiles (usec): 00:16:26.488 | 1.00th=[ 169], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 206], 00:16:26.488 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 233], 
00:16:26.488 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 281], 00:16:26.488 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 347], 99.95th=[ 465], 00:16:26.488 | 99.99th=[ 725] 00:16:26.488 write: IOPS=2232, BW=8931KiB/s (9145kB/s)(8940KiB/1001msec); 0 zone resets 00:16:26.488 slat (nsec): min=16255, max=99569, avg=23083.43, stdev=5782.38 00:16:26.488 clat (usec): min=112, max=2867, avg=197.49, stdev=67.06 00:16:26.488 lat (usec): min=131, max=2899, avg=220.57, stdev=68.19 00:16:26.488 clat percentiles (usec): 00:16:26.488 | 1.00th=[ 133], 5.00th=[ 151], 10.00th=[ 159], 20.00th=[ 169], 00:16:26.488 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 200], 00:16:26.488 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 241], 95.00th=[ 253], 00:16:26.488 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 506], 99.95th=[ 914], 00:16:26.488 | 99.99th=[ 2868] 00:16:26.488 bw ( KiB/s): min= 8544, max= 8544, per=28.93%, avg=8544.00, stdev= 0.00, samples=1 00:16:26.488 iops : min= 2136, max= 2136, avg=2136.00, stdev= 0.00, samples=1 00:16:26.488 lat (usec) : 250=87.14%, 500=12.77%, 750=0.05%, 1000=0.02% 00:16:26.488 lat (msec) : 4=0.02% 00:16:26.488 cpu : usr=1.70%, sys=5.80%, ctx=4283, majf=0, minf=5 00:16:26.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:26.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.488 issued rwts: total=2048,2235,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:26.488 job2: (groupid=0, jobs=1): err= 0: pid=86843: Sat Jul 13 00:25:13 2024 00:16:26.488 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:26.488 slat (usec): min=12, max=121, avg=17.15, stdev= 4.91 00:16:26.488 clat (usec): min=148, max=4236, avg=240.25, stdev=137.57 00:16:26.488 lat (usec): min=166, max=4249, avg=257.40, stdev=137.78 00:16:26.488 clat percentiles (usec): 00:16:26.488 | 1.00th=[ 169], 5.00th=[ 182], 10.00th=[ 192], 20.00th=[ 202], 00:16:26.488 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 233], 00:16:26.488 | 70.00th=[ 241], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 306], 00:16:26.488 | 99.00th=[ 529], 99.50th=[ 635], 99.90th=[ 1844], 99.95th=[ 3720], 00:16:26.488 | 99.99th=[ 4228] 00:16:26.488 write: IOPS=2082, BW=8332KiB/s (8532kB/s)(8340KiB/1001msec); 0 zone resets 00:16:26.488 slat (nsec): min=15733, max=89040, avg=25146.59, stdev=6964.29 00:16:26.488 clat (usec): min=116, max=717, avg=197.89, stdev=44.24 00:16:26.488 lat (usec): min=134, max=740, avg=223.03, stdev=45.74 00:16:26.488 clat percentiles (usec): 00:16:26.488 | 1.00th=[ 131], 5.00th=[ 147], 10.00th=[ 155], 20.00th=[ 167], 00:16:26.488 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 200], 00:16:26.488 | 70.00th=[ 210], 80.00th=[ 223], 90.00th=[ 243], 95.00th=[ 265], 00:16:26.488 | 99.00th=[ 400], 99.50th=[ 449], 99.90th=[ 545], 99.95th=[ 603], 00:16:26.488 | 99.99th=[ 717] 00:16:26.488 bw ( KiB/s): min= 9000, max= 9000, per=30.47%, avg=9000.00, stdev= 0.00, samples=1 00:16:26.488 iops : min= 2250, max= 2250, avg=2250.00, stdev= 0.00, samples=1 00:16:26.488 lat (usec) : 250=84.78%, 500=14.32%, 750=0.80% 00:16:26.488 lat (msec) : 2=0.05%, 4=0.02%, 10=0.02% 00:16:26.488 cpu : usr=0.70%, sys=7.50%, ctx=4133, majf=0, minf=15 00:16:26.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:26.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.488 issued rwts: total=2048,2085,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:26.488 job3: (groupid=0, jobs=1): err= 0: pid=86844: Sat Jul 13 00:25:13 2024 00:16:26.488 read: IOPS=1108, BW=4436KiB/s (4542kB/s)(4440KiB/1001msec) 00:16:26.488 slat (nsec): min=13994, max=74405, avg=21938.85, stdev=7000.10 00:16:26.488 clat (usec): min=291, max=668, avg=402.52, stdev=50.83 00:16:26.488 lat (usec): min=308, max=704, avg=424.45, stdev=52.60 00:16:26.488 clat percentiles (usec): 00:16:26.488 | 1.00th=[ 322], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 363], 00:16:26.488 | 30.00th=[ 371], 40.00th=[ 383], 50.00th=[ 396], 60.00th=[ 408], 00:16:26.488 | 70.00th=[ 420], 80.00th=[ 437], 90.00th=[ 465], 95.00th=[ 494], 00:16:26.488 | 99.00th=[ 570], 99.50th=[ 635], 99.90th=[ 660], 99.95th=[ 668], 00:16:26.488 | 99.99th=[ 668] 00:16:26.488 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:26.488 slat (nsec): min=22143, max=97777, avg=40456.98, stdev=10355.22 00:16:26.488 clat (usec): min=157, max=3192, avg=299.03, stdev=98.19 00:16:26.488 lat (usec): min=209, max=3232, avg=339.48, stdev=97.68 00:16:26.488 clat percentiles (usec): 00:16:26.488 | 1.00th=[ 186], 5.00th=[ 215], 10.00th=[ 231], 20.00th=[ 249], 00:16:26.488 | 30.00th=[ 262], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 297], 00:16:26.488 | 70.00th=[ 322], 80.00th=[ 351], 90.00th=[ 388], 95.00th=[ 412], 00:16:26.488 | 99.00th=[ 457], 99.50th=[ 490], 99.90th=[ 1139], 99.95th=[ 3195], 00:16:26.488 | 99.99th=[ 3195] 00:16:26.488 bw ( KiB/s): min= 6904, max= 6904, per=23.37%, avg=6904.00, stdev= 0.00, samples=1 00:16:26.488 iops : min= 1726, max= 1726, avg=1726.00, stdev= 0.00, samples=1 00:16:26.488 lat (usec) : 250=12.77%, 500=85.30%, 750=1.85% 00:16:26.488 lat (msec) : 2=0.04%, 4=0.04% 00:16:26.488 cpu : usr=1.40%, sys=6.60%, ctx=2654, majf=0, minf=13 00:16:26.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:26.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.488 issued rwts: total=1110,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:26.488 00:16:26.488 Run status group 0 (all jobs): 00:16:26.488 READ: bw=24.7MiB/s (25.9MB/s), 4436KiB/s-8184KiB/s (4542kB/s-8380kB/s), io=24.7MiB (25.9MB), run=1001-1001msec 00:16:26.488 WRITE: bw=28.8MiB/s (30.2MB/s), 6138KiB/s-8931KiB/s (6285kB/s-9145kB/s), io=28.9MiB (30.3MB), run=1001-1001msec 00:16:26.488 00:16:26.488 Disk stats (read/write): 00:16:26.488 nvme0n1: ios=1073/1286, merge=0/0, ticks=434/414, in_queue=848, util=88.86% 00:16:26.488 nvme0n2: ios=1679/2048, merge=0/0, ticks=407/440, in_queue=847, util=88.64% 00:16:26.488 nvme0n3: ios=1658/2048, merge=0/0, ticks=395/433, in_queue=828, util=89.15% 00:16:26.488 nvme0n4: ios=1024/1269, merge=0/0, ticks=410/397, in_queue=807, util=89.83% 00:16:26.489 00:25:13 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:26.489 [global] 00:16:26.489 thread=1 00:16:26.489 invalidate=1 00:16:26.489 rw=write 00:16:26.489 time_based=1 00:16:26.489 runtime=1 00:16:26.489 ioengine=libaio 00:16:26.489 direct=1 00:16:26.489 bs=4096 00:16:26.489 iodepth=128 00:16:26.489 
norandommap=0 00:16:26.489 numjobs=1 00:16:26.489 00:16:26.489 verify_dump=1 00:16:26.489 verify_backlog=512 00:16:26.489 verify_state_save=0 00:16:26.489 do_verify=1 00:16:26.489 verify=crc32c-intel 00:16:26.489 [job0] 00:16:26.489 filename=/dev/nvme0n1 00:16:26.489 [job1] 00:16:26.489 filename=/dev/nvme0n2 00:16:26.489 [job2] 00:16:26.489 filename=/dev/nvme0n3 00:16:26.489 [job3] 00:16:26.489 filename=/dev/nvme0n4 00:16:26.489 Could not set queue depth (nvme0n1) 00:16:26.489 Could not set queue depth (nvme0n2) 00:16:26.489 Could not set queue depth (nvme0n3) 00:16:26.489 Could not set queue depth (nvme0n4) 00:16:26.489 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:26.489 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:26.489 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:26.489 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:26.489 fio-3.35 00:16:26.489 Starting 4 threads 00:16:27.865 00:16:27.865 job0: (groupid=0, jobs=1): err= 0: pid=86904: Sat Jul 13 00:25:14 2024 00:16:27.865 read: IOPS=3673, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1002msec) 00:16:27.865 slat (usec): min=8, max=5545, avg=120.37, stdev=564.26 00:16:27.865 clat (usec): min=1391, max=20127, avg=15847.80, stdev=2057.70 00:16:27.865 lat (usec): min=1404, max=22510, avg=15968.17, stdev=2000.21 00:16:27.865 clat percentiles (usec): 00:16:27.865 | 1.00th=[ 5997], 5.00th=[12911], 10.00th=[13829], 20.00th=[14615], 00:16:27.865 | 30.00th=[15139], 40.00th=[15926], 50.00th=[16319], 60.00th=[16581], 00:16:27.865 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:16:27.865 | 99.00th=[18744], 99.50th=[18744], 99.90th=[20055], 99.95th=[20055], 00:16:27.865 | 99.99th=[20055] 00:16:27.865 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:16:27.865 slat (usec): min=11, max=4473, avg=127.72, stdev=564.89 00:16:27.865 clat (usec): min=12141, max=20126, avg=16596.56, stdev=1854.43 00:16:27.865 lat (usec): min=12159, max=20151, avg=16724.28, stdev=1842.33 00:16:27.866 clat percentiles (usec): 00:16:27.866 | 1.00th=[12911], 5.00th=[13698], 10.00th=[14353], 20.00th=[14877], 00:16:27.866 | 30.00th=[15401], 40.00th=[15664], 50.00th=[16188], 60.00th=[17171], 00:16:27.866 | 70.00th=[17957], 80.00th=[18482], 90.00th=[19006], 95.00th=[19530], 00:16:27.866 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20055], 99.95th=[20055], 00:16:27.866 | 99.99th=[20055] 00:16:27.866 bw ( KiB/s): min=16144, max=16384, per=29.54%, avg=16264.00, stdev=169.71, samples=2 00:16:27.866 iops : min= 4036, max= 4096, avg=4066.00, stdev=42.43, samples=2 00:16:27.866 lat (msec) : 2=0.17%, 4=0.05%, 10=0.77%, 20=98.38%, 50=0.63% 00:16:27.866 cpu : usr=4.40%, sys=11.49%, ctx=523, majf=0, minf=11 00:16:27.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:27.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:27.866 issued rwts: total=3681,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:27.866 job1: (groupid=0, jobs=1): err= 0: pid=86905: Sat Jul 13 00:25:14 2024 00:16:27.866 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:16:27.866 slat (usec): min=4, 
max=13739, avg=153.74, stdev=840.13 00:16:27.866 clat (usec): min=10878, max=48694, avg=20105.67, stdev=10390.86 00:16:27.866 lat (usec): min=10893, max=48708, avg=20259.41, stdev=10453.49 00:16:27.866 clat percentiles (usec): 00:16:27.866 | 1.00th=[11338], 5.00th=[12780], 10.00th=[14091], 20.00th=[14746], 00:16:27.866 | 30.00th=[15008], 40.00th=[15401], 50.00th=[15926], 60.00th=[16450], 00:16:27.866 | 70.00th=[17171], 80.00th=[18220], 90.00th=[42730], 95.00th=[45876], 00:16:27.866 | 99.00th=[47449], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:16:27.866 | 99.99th=[48497] 00:16:27.866 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:16:27.866 slat (usec): min=11, max=11732, avg=139.36, stdev=648.39 00:16:27.866 clat (usec): min=1909, max=39718, avg=18134.00, stdev=5408.06 00:16:27.866 lat (usec): min=6724, max=44489, avg=18273.36, stdev=5435.39 00:16:27.866 clat percentiles (usec): 00:16:27.866 | 1.00th=[11207], 5.00th=[12387], 10.00th=[13304], 20.00th=[14484], 00:16:27.866 | 30.00th=[15926], 40.00th=[16450], 50.00th=[16909], 60.00th=[17695], 00:16:27.866 | 70.00th=[17957], 80.00th=[19006], 90.00th=[27395], 95.00th=[31851], 00:16:27.866 | 99.00th=[34866], 99.50th=[36439], 99.90th=[36963], 99.95th=[39584], 00:16:27.866 | 99.99th=[39584] 00:16:27.866 bw ( KiB/s): min=11248, max=16351, per=25.06%, avg=13799.50, stdev=3608.37, samples=2 00:16:27.866 iops : min= 2812, max= 4087, avg=3449.50, stdev=901.56, samples=2 00:16:27.866 lat (msec) : 2=0.02%, 10=0.36%, 20=83.68%, 50=15.95% 00:16:27.866 cpu : usr=3.59%, sys=9.77%, ctx=496, majf=0, minf=14 00:16:27.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:27.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:27.866 issued rwts: total=3072,3581,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:27.866 job2: (groupid=0, jobs=1): err= 0: pid=86906: Sat Jul 13 00:25:14 2024 00:16:27.866 read: IOPS=3527, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1002msec) 00:16:27.866 slat (usec): min=7, max=5321, avg=132.77, stdev=613.28 00:16:27.866 clat (usec): min=452, max=22331, avg=17334.64, stdev=2119.59 00:16:27.866 lat (usec): min=1287, max=22356, avg=17467.41, stdev=2044.33 00:16:27.866 clat percentiles (usec): 00:16:27.866 | 1.00th=[ 5800], 5.00th=[14091], 10.00th=[15270], 20.00th=[16450], 00:16:27.866 | 30.00th=[16909], 40.00th=[17433], 50.00th=[17695], 60.00th=[17957], 00:16:27.866 | 70.00th=[18220], 80.00th=[18744], 90.00th=[19268], 95.00th=[19530], 00:16:27.866 | 99.00th=[20317], 99.50th=[21365], 99.90th=[22414], 99.95th=[22414], 00:16:27.866 | 99.99th=[22414] 00:16:27.866 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:16:27.866 slat (usec): min=10, max=4825, avg=138.61, stdev=566.39 00:16:27.866 clat (usec): min=11874, max=23662, avg=18096.43, stdev=2188.81 00:16:27.866 lat (usec): min=11918, max=23683, avg=18235.04, stdev=2180.98 00:16:27.866 clat percentiles (usec): 00:16:27.866 | 1.00th=[13173], 5.00th=[14353], 10.00th=[15139], 20.00th=[16057], 00:16:27.866 | 30.00th=[16909], 40.00th=[17433], 50.00th=[18220], 60.00th=[19006], 00:16:27.866 | 70.00th=[19268], 80.00th=[20055], 90.00th=[20579], 95.00th=[21627], 00:16:27.866 | 99.00th=[22938], 99.50th=[23462], 99.90th=[23725], 99.95th=[23725], 00:16:27.866 | 99.99th=[23725] 00:16:27.866 bw ( KiB/s): min=14304, max=14368, per=26.04%, 
avg=14336.00, stdev=45.25, samples=2 00:16:27.866 iops : min= 3576, max= 3592, avg=3584.00, stdev=11.31, samples=2 00:16:27.866 lat (usec) : 500=0.01% 00:16:27.866 lat (msec) : 2=0.06%, 10=0.90%, 20=87.86%, 50=11.17% 00:16:27.866 cpu : usr=4.20%, sys=11.39%, ctx=528, majf=0, minf=3 00:16:27.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:27.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:27.866 issued rwts: total=3535,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:27.866 job3: (groupid=0, jobs=1): err= 0: pid=86907: Sat Jul 13 00:25:14 2024 00:16:27.866 read: IOPS=2417, BW=9671KiB/s (9903kB/s)(9700KiB/1003msec) 00:16:27.866 slat (usec): min=4, max=13268, avg=202.72, stdev=1003.71 00:16:27.866 clat (usec): min=404, max=49274, avg=25710.30, stdev=7828.24 00:16:27.866 lat (usec): min=9951, max=49287, avg=25913.02, stdev=7823.41 00:16:27.866 clat percentiles (usec): 00:16:27.866 | 1.00th=[10421], 5.00th=[18482], 10.00th=[19792], 20.00th=[21365], 00:16:27.866 | 30.00th=[21890], 40.00th=[22414], 50.00th=[22938], 60.00th=[23200], 00:16:27.866 | 70.00th=[24249], 80.00th=[30016], 90.00th=[39584], 95.00th=[45351], 00:16:27.866 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:16:27.866 | 99.99th=[49021] 00:16:27.866 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:16:27.866 slat (usec): min=11, max=5921, avg=189.46, stdev=747.71 00:16:27.866 clat (usec): min=17449, max=44514, avg=24943.22, stdev=4474.35 00:16:27.866 lat (usec): min=17474, max=44545, avg=25132.67, stdev=4487.14 00:16:27.866 clat percentiles (usec): 00:16:27.866 | 1.00th=[18482], 5.00th=[19792], 10.00th=[20317], 20.00th=[20841], 00:16:27.866 | 30.00th=[22152], 40.00th=[23200], 50.00th=[23987], 60.00th=[25035], 00:16:27.866 | 70.00th=[25822], 80.00th=[27395], 90.00th=[32637], 95.00th=[33817], 00:16:27.866 | 99.00th=[35914], 99.50th=[37487], 99.90th=[43779], 99.95th=[43779], 00:16:27.866 | 99.99th=[44303] 00:16:27.866 bw ( KiB/s): min= 8192, max=12288, per=18.60%, avg=10240.00, stdev=2896.31, samples=2 00:16:27.866 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:16:27.866 lat (usec) : 500=0.02% 00:16:27.866 lat (msec) : 10=0.06%, 20=8.97%, 50=90.95% 00:16:27.866 cpu : usr=3.09%, sys=8.78%, ctx=455, majf=0, minf=17 00:16:27.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:16:27.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:27.866 issued rwts: total=2425,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:27.866 00:16:27.866 Run status group 0 (all jobs): 00:16:27.866 READ: bw=49.5MiB/s (51.9MB/s), 9671KiB/s-14.3MiB/s (9903kB/s-15.0MB/s), io=49.7MiB (52.1MB), run=1002-1004msec 00:16:27.866 WRITE: bw=53.8MiB/s (56.4MB/s), 9.97MiB/s-16.0MiB/s (10.5MB/s-16.7MB/s), io=54.0MiB (56.6MB), run=1002-1004msec 00:16:27.866 00:16:27.866 Disk stats (read/write): 00:16:27.866 nvme0n1: ios=3122/3511, merge=0/0, ticks=11825/13098, in_queue=24923, util=87.56% 00:16:27.866 nvme0n2: ios=3046/3072, merge=0/0, ticks=16925/14727, in_queue=31652, util=87.82% 00:16:27.866 nvme0n3: ios=2917/3072, merge=0/0, ticks=12487/12835, in_queue=25322, util=89.00% 
00:16:27.866 nvme0n4: ios=2048/2470, merge=0/0, ticks=11553/13995, in_queue=25548, util=89.56% 00:16:27.866 00:25:14 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:27.866 [global] 00:16:27.866 thread=1 00:16:27.866 invalidate=1 00:16:27.866 rw=randwrite 00:16:27.866 time_based=1 00:16:27.866 runtime=1 00:16:27.866 ioengine=libaio 00:16:27.866 direct=1 00:16:27.866 bs=4096 00:16:27.866 iodepth=128 00:16:27.866 norandommap=0 00:16:27.866 numjobs=1 00:16:27.866 00:16:27.866 verify_dump=1 00:16:27.866 verify_backlog=512 00:16:27.866 verify_state_save=0 00:16:27.866 do_verify=1 00:16:27.866 verify=crc32c-intel 00:16:27.866 [job0] 00:16:27.866 filename=/dev/nvme0n1 00:16:27.866 [job1] 00:16:27.866 filename=/dev/nvme0n2 00:16:27.866 [job2] 00:16:27.866 filename=/dev/nvme0n3 00:16:27.866 [job3] 00:16:27.866 filename=/dev/nvme0n4 00:16:27.866 Could not set queue depth (nvme0n1) 00:16:27.866 Could not set queue depth (nvme0n2) 00:16:27.866 Could not set queue depth (nvme0n3) 00:16:27.866 Could not set queue depth (nvme0n4) 00:16:27.866 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:27.866 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:27.866 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:27.866 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:27.866 fio-3.35 00:16:27.866 Starting 4 threads 00:16:29.243 00:16:29.243 job0: (groupid=0, jobs=1): err= 0: pid=86961: Sat Jul 13 00:25:16 2024 00:16:29.243 read: IOPS=3685, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1007msec) 00:16:29.243 slat (usec): min=5, max=14443, avg=128.94, stdev=884.10 00:16:29.243 clat (usec): min=3982, max=31603, avg=17002.65, stdev=3913.43 00:16:29.243 lat (usec): min=6389, max=31671, avg=17131.59, stdev=3952.62 00:16:29.243 clat percentiles (usec): 00:16:29.243 | 1.00th=[ 8979], 5.00th=[12125], 10.00th=[12649], 20.00th=[13960], 00:16:29.243 | 30.00th=[15008], 40.00th=[15533], 50.00th=[16450], 60.00th=[17695], 00:16:29.243 | 70.00th=[18482], 80.00th=[19792], 90.00th=[21890], 95.00th=[24511], 00:16:29.243 | 99.00th=[29754], 99.50th=[30540], 99.90th=[31589], 99.95th=[31589], 00:16:29.243 | 99.99th=[31589] 00:16:29.243 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:16:29.243 slat (usec): min=5, max=14869, avg=120.05, stdev=881.13 00:16:29.243 clat (usec): min=5607, max=31557, avg=15733.74, stdev=2792.07 00:16:29.243 lat (usec): min=5633, max=31569, avg=15853.79, stdev=2924.32 00:16:29.243 clat percentiles (usec): 00:16:29.243 | 1.00th=[ 6652], 5.00th=[ 9634], 10.00th=[12256], 20.00th=[14877], 00:16:29.243 | 30.00th=[15401], 40.00th=[15795], 50.00th=[16188], 60.00th=[16450], 00:16:29.243 | 70.00th=[16909], 80.00th=[17695], 90.00th=[18220], 95.00th=[19006], 00:16:29.243 | 99.00th=[19792], 99.50th=[26346], 99.90th=[30016], 99.95th=[31589], 00:16:29.243 | 99.99th=[31589] 00:16:29.243 bw ( KiB/s): min=16376, max=16384, per=27.97%, avg=16380.00, stdev= 5.66, samples=2 00:16:29.243 iops : min= 4094, max= 4096, avg=4095.00, stdev= 1.41, samples=2 00:16:29.243 lat (msec) : 4=0.01%, 10=3.82%, 20=87.72%, 50=8.45% 00:16:29.243 cpu : usr=3.78%, sys=10.83%, ctx=362, majf=0, minf=3 00:16:29.243 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:29.243 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:29.243 issued rwts: total=3711,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.243 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:29.243 job1: (groupid=0, jobs=1): err= 0: pid=86962: Sat Jul 13 00:25:16 2024 00:16:29.243 read: IOPS=3698, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1003msec) 00:16:29.243 slat (usec): min=3, max=14939, avg=136.06, stdev=882.75 00:16:29.243 clat (usec): min=933, max=32608, avg=16979.21, stdev=4326.84 00:16:29.243 lat (usec): min=5356, max=32624, avg=17115.27, stdev=4359.95 00:16:29.243 clat percentiles (usec): 00:16:29.243 | 1.00th=[ 5866], 5.00th=[12125], 10.00th=[12780], 20.00th=[13960], 00:16:29.243 | 30.00th=[14877], 40.00th=[15401], 50.00th=[15664], 60.00th=[16319], 00:16:29.243 | 70.00th=[17695], 80.00th=[20055], 90.00th=[23725], 95.00th=[26346], 00:16:29.243 | 99.00th=[30016], 99.50th=[30540], 99.90th=[32637], 99.95th=[32637], 00:16:29.243 | 99.99th=[32637] 00:16:29.243 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:16:29.243 slat (usec): min=5, max=13664, avg=113.39, stdev=701.74 00:16:29.243 clat (usec): min=3304, max=32538, avg=15626.05, stdev=3337.67 00:16:29.243 lat (usec): min=3334, max=32554, avg=15739.44, stdev=3412.95 00:16:29.243 clat percentiles (usec): 00:16:29.243 | 1.00th=[ 5342], 5.00th=[ 7832], 10.00th=[10683], 20.00th=[14222], 00:16:29.243 | 30.00th=[15401], 40.00th=[15926], 50.00th=[16581], 60.00th=[16909], 00:16:29.243 | 70.00th=[17433], 80.00th=[17957], 90.00th=[18482], 95.00th=[18744], 00:16:29.243 | 99.00th=[19530], 99.50th=[24511], 99.90th=[30278], 99.95th=[31065], 00:16:29.243 | 99.99th=[32637] 00:16:29.243 bw ( KiB/s): min=16368, max=16384, per=27.96%, avg=16376.00, stdev=11.31, samples=2 00:16:29.243 iops : min= 4092, max= 4096, avg=4094.00, stdev= 2.83, samples=2 00:16:29.243 lat (usec) : 1000=0.01% 00:16:29.243 lat (msec) : 4=0.04%, 10=5.12%, 20=85.08%, 50=9.75% 00:16:29.243 cpu : usr=3.79%, sys=9.78%, ctx=486, majf=0, minf=5 00:16:29.243 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:29.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:29.243 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.243 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:29.243 job2: (groupid=0, jobs=1): err= 0: pid=86963: Sat Jul 13 00:25:16 2024 00:16:29.243 read: IOPS=2617, BW=10.2MiB/s (10.7MB/s)(10.4MiB/1014msec) 00:16:29.243 slat (usec): min=6, max=20219, avg=188.45, stdev=1236.53 00:16:29.243 clat (usec): min=7367, max=45437, avg=23214.65, stdev=6533.50 00:16:29.243 lat (usec): min=7381, max=45458, avg=23403.10, stdev=6594.90 00:16:29.243 clat percentiles (usec): 00:16:29.243 | 1.00th=[ 8586], 5.00th=[17171], 10.00th=[17695], 20.00th=[18482], 00:16:29.243 | 30.00th=[19530], 40.00th=[20579], 50.00th=[21365], 60.00th=[21890], 00:16:29.243 | 70.00th=[25035], 80.00th=[26870], 90.00th=[32637], 95.00th=[38011], 00:16:29.243 | 99.00th=[42730], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:16:29.243 | 99.99th=[45351] 00:16:29.243 write: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec); 0 zone resets 00:16:29.243 slat (usec): min=5, max=18618, avg=154.64, stdev=746.11 00:16:29.243 clat (usec): min=3596, max=45368, avg=21764.19, stdev=5036.53 00:16:29.243 lat 
(usec): min=3622, max=45380, avg=21918.83, stdev=5099.97 00:16:29.243 clat percentiles (usec): 00:16:29.243 | 1.00th=[ 7046], 5.00th=[ 9765], 10.00th=[12780], 20.00th=[19530], 00:16:29.243 | 30.00th=[21365], 40.00th=[22938], 50.00th=[23725], 60.00th=[23987], 00:16:29.244 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25297], 95.00th=[25560], 00:16:29.244 | 99.00th=[32637], 99.50th=[35914], 99.90th=[42730], 99.95th=[43254], 00:16:29.244 | 99.99th=[45351] 00:16:29.244 bw ( KiB/s): min=12024, max=12288, per=20.75%, avg=12156.00, stdev=186.68, samples=2 00:16:29.244 iops : min= 3006, max= 3072, avg=3039.00, stdev=46.67, samples=2 00:16:29.244 lat (msec) : 4=0.09%, 10=4.02%, 20=24.29%, 50=71.60% 00:16:29.244 cpu : usr=2.76%, sys=8.39%, ctx=435, majf=0, minf=7 00:16:29.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:29.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:29.244 issued rwts: total=2654,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.244 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:29.244 job3: (groupid=0, jobs=1): err= 0: pid=86964: Sat Jul 13 00:25:16 2024 00:16:29.244 read: IOPS=3466, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1006msec) 00:16:29.244 slat (usec): min=6, max=17805, avg=143.12, stdev=985.59 00:16:29.244 clat (usec): min=3500, max=35741, avg=18596.34, stdev=4386.88 00:16:29.244 lat (usec): min=5888, max=35758, avg=18739.45, stdev=4433.46 00:16:29.244 clat percentiles (usec): 00:16:29.244 | 1.00th=[10159], 5.00th=[12649], 10.00th=[14484], 20.00th=[15926], 00:16:29.244 | 30.00th=[16581], 40.00th=[17171], 50.00th=[17695], 60.00th=[18482], 00:16:29.244 | 70.00th=[19792], 80.00th=[21365], 90.00th=[24249], 95.00th=[27395], 00:16:29.244 | 99.00th=[33817], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:16:29.244 | 99.99th=[35914] 00:16:29.244 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:16:29.244 slat (usec): min=5, max=15724, avg=131.34, stdev=952.60 00:16:29.244 clat (usec): min=5109, max=35679, avg=17449.28, stdev=3272.95 00:16:29.244 lat (usec): min=5134, max=35690, avg=17580.63, stdev=3408.73 00:16:29.244 clat percentiles (usec): 00:16:29.244 | 1.00th=[ 5735], 5.00th=[10028], 10.00th=[14091], 20.00th=[15401], 00:16:29.244 | 30.00th=[16909], 40.00th=[17957], 50.00th=[18220], 60.00th=[18744], 00:16:29.244 | 70.00th=[19268], 80.00th=[19530], 90.00th=[20055], 95.00th=[20579], 00:16:29.244 | 99.00th=[21365], 99.50th=[28967], 99.90th=[34866], 99.95th=[35390], 00:16:29.244 | 99.99th=[35914] 00:16:29.244 bw ( KiB/s): min=13600, max=15072, per=24.48%, avg=14336.00, stdev=1040.86, samples=2 00:16:29.244 iops : min= 3400, max= 3768, avg=3584.00, stdev=260.22, samples=2 00:16:29.244 lat (msec) : 4=0.01%, 10=2.93%, 20=77.32%, 50=19.74% 00:16:29.244 cpu : usr=3.78%, sys=9.75%, ctx=338, majf=0, minf=6 00:16:29.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:29.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:29.244 issued rwts: total=3487,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.244 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:29.244 00:16:29.244 Run status group 0 (all jobs): 00:16:29.244 READ: bw=52.2MiB/s (54.8MB/s), 10.2MiB/s-14.4MiB/s (10.7MB/s-15.1MB/s), io=53.0MiB (55.5MB), run=1003-1014msec 
00:16:29.244 WRITE: bw=57.2MiB/s (60.0MB/s), 11.8MiB/s-16.0MiB/s (12.4MB/s-16.7MB/s), io=58.0MiB (60.8MB), run=1003-1014msec 00:16:29.244 00:16:29.244 Disk stats (read/write): 00:16:29.244 nvme0n1: ios=3122/3503, merge=0/0, ticks=49062/52429, in_queue=101491, util=87.58% 00:16:29.244 nvme0n2: ios=3107/3582, merge=0/0, ticks=48627/54063, in_queue=102690, util=87.72% 00:16:29.244 nvme0n3: ios=2174/2560, merge=0/0, ticks=49627/53863, in_queue=103490, util=89.20% 00:16:29.244 nvme0n4: ios=2873/3072, merge=0/0, ticks=51009/50796, in_queue=101805, util=89.66% 00:16:29.244 00:25:16 -- target/fio.sh@55 -- # sync 00:16:29.244 00:25:16 -- target/fio.sh@59 -- # fio_pid=86982 00:16:29.244 00:25:16 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:29.244 00:25:16 -- target/fio.sh@61 -- # sleep 3 00:16:29.244 [global] 00:16:29.244 thread=1 00:16:29.244 invalidate=1 00:16:29.244 rw=read 00:16:29.244 time_based=1 00:16:29.244 runtime=10 00:16:29.244 ioengine=libaio 00:16:29.244 direct=1 00:16:29.244 bs=4096 00:16:29.244 iodepth=1 00:16:29.244 norandommap=1 00:16:29.244 numjobs=1 00:16:29.244 00:16:29.244 [job0] 00:16:29.244 filename=/dev/nvme0n1 00:16:29.244 [job1] 00:16:29.244 filename=/dev/nvme0n2 00:16:29.244 [job2] 00:16:29.244 filename=/dev/nvme0n3 00:16:29.244 [job3] 00:16:29.244 filename=/dev/nvme0n4 00:16:29.244 Could not set queue depth (nvme0n1) 00:16:29.244 Could not set queue depth (nvme0n2) 00:16:29.244 Could not set queue depth (nvme0n3) 00:16:29.244 Could not set queue depth (nvme0n4) 00:16:29.244 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.244 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.244 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.244 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.244 fio-3.35 00:16:29.244 Starting 4 threads 00:16:32.534 00:25:19 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:32.534 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=28340224, buflen=4096 00:16:32.534 fio: pid=87025, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:32.534 00:25:19 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:32.534 fio: pid=87024, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:32.534 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=34721792, buflen=4096 00:16:32.793 00:25:19 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:32.793 00:25:19 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:32.793 fio: pid=87022, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:32.793 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=38187008, buflen=4096 00:16:32.793 00:25:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:32.793 00:25:20 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:33.052 fio: pid=87023, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:33.052 fio: io_u error on file /dev/nvme0n2: Remote I/O 
error: read offset=41050112, buflen=4096 00:16:33.052 00:16:33.052 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=87022: Sat Jul 13 00:25:20 2024 00:16:33.052 read: IOPS=2739, BW=10.7MiB/s (11.2MB/s)(36.4MiB/3403msec) 00:16:33.052 slat (usec): min=8, max=12767, avg=19.27, stdev=208.25 00:16:33.052 clat (usec): min=3, max=7115, avg=344.04, stdev=128.40 00:16:33.052 lat (usec): min=138, max=13233, avg=363.31, stdev=243.88 00:16:33.052 clat percentiles (usec): 00:16:33.052 | 1.00th=[ 163], 5.00th=[ 184], 10.00th=[ 198], 20.00th=[ 239], 00:16:33.052 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 375], 00:16:33.052 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 424], 95.00th=[ 449], 00:16:33.052 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[ 1516], 99.95th=[ 2704], 00:16:33.052 | 99.99th=[ 7111] 00:16:33.052 bw ( KiB/s): min= 9792, max=13264, per=27.99%, avg=10655.17, stdev=1290.60, samples=6 00:16:33.052 iops : min= 2448, max= 3316, avg=2663.67, stdev=322.71, samples=6 00:16:33.052 lat (usec) : 4=0.02%, 100=0.01%, 250=20.73%, 500=77.89%, 750=1.21% 00:16:33.052 lat (usec) : 1000=0.01% 00:16:33.052 lat (msec) : 2=0.06%, 4=0.03%, 10=0.02% 00:16:33.052 cpu : usr=0.88%, sys=3.41%, ctx=9354, majf=0, minf=1 00:16:33.052 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:33.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.052 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.052 issued rwts: total=9324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.052 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:33.052 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=87023: Sat Jul 13 00:25:20 2024 00:16:33.052 read: IOPS=2745, BW=10.7MiB/s (11.2MB/s)(39.1MiB/3651msec) 00:16:33.052 slat (usec): min=13, max=10662, avg=25.69, stdev=211.84 00:16:33.052 clat (usec): min=126, max=3778, avg=336.61, stdev=115.96 00:16:33.052 lat (usec): min=142, max=10840, avg=362.31, stdev=239.31 00:16:33.052 clat percentiles (usec): 00:16:33.052 | 1.00th=[ 139], 5.00th=[ 151], 10.00th=[ 165], 20.00th=[ 202], 00:16:33.052 | 30.00th=[ 243], 40.00th=[ 367], 50.00th=[ 379], 60.00th=[ 388], 00:16:33.052 | 70.00th=[ 404], 80.00th=[ 420], 90.00th=[ 445], 95.00th=[ 461], 00:16:33.052 | 99.00th=[ 515], 99.50th=[ 578], 99.90th=[ 676], 99.95th=[ 1020], 00:16:33.052 | 99.99th=[ 2376] 00:16:33.052 bw ( KiB/s): min= 9120, max=17898, per=28.05%, avg=10678.57, stdev=3190.18, samples=7 00:16:33.052 iops : min= 2280, max= 4474, avg=2669.57, stdev=797.36, samples=7 00:16:33.052 lat (usec) : 250=30.52%, 500=68.11%, 750=1.28%, 1000=0.01% 00:16:33.052 lat (msec) : 2=0.05%, 4=0.02% 00:16:33.052 cpu : usr=0.88%, sys=4.63%, ctx=10040, majf=0, minf=1 00:16:33.052 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:33.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.052 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.052 issued rwts: total=10023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.052 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:33.052 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=87024: Sat Jul 13 00:25:20 2024 00:16:33.052 read: IOPS=2660, BW=10.4MiB/s (10.9MB/s)(33.1MiB/3187msec) 00:16:33.052 slat (usec): min=8, max=8391, avg=17.18, stdev=124.00 00:16:33.052 clat 
(usec): min=100, max=2929, avg=357.31, stdev=84.59 00:16:33.052 lat (usec): min=174, max=8676, avg=374.49, stdev=148.47 00:16:33.052 clat percentiles (usec): 00:16:33.052 | 1.00th=[ 180], 5.00th=[ 198], 10.00th=[ 227], 20.00th=[ 330], 00:16:33.052 | 30.00th=[ 347], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 375], 00:16:33.052 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 429], 95.00th=[ 453], 00:16:33.052 | 99.00th=[ 506], 99.50th=[ 545], 99.90th=[ 742], 99.95th=[ 1106], 00:16:33.052 | 99.99th=[ 2933] 00:16:33.052 bw ( KiB/s): min= 9792, max=13256, per=28.05%, avg=10676.50, stdev=1278.38, samples=6 00:16:33.052 iops : min= 2448, max= 3314, avg=2669.00, stdev=319.64, samples=6 00:16:33.052 lat (usec) : 250=12.22%, 500=86.62%, 750=1.05%, 1000=0.02% 00:16:33.052 lat (msec) : 2=0.04%, 4=0.04% 00:16:33.052 cpu : usr=0.75%, sys=3.45%, ctx=8499, majf=0, minf=1 00:16:33.052 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:33.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.052 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.052 issued rwts: total=8478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.052 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:33.053 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=87025: Sat Jul 13 00:25:20 2024 00:16:33.053 read: IOPS=2349, BW=9398KiB/s (9623kB/s)(27.0MiB/2945msec) 00:16:33.053 slat (usec): min=15, max=966, avg=31.96, stdev=15.61 00:16:33.053 clat (usec): min=186, max=2906, avg=389.61, stdev=54.55 00:16:33.053 lat (usec): min=231, max=2927, avg=421.58, stdev=55.06 00:16:33.053 clat percentiles (usec): 00:16:33.053 | 1.00th=[ 277], 5.00th=[ 334], 10.00th=[ 347], 20.00th=[ 359], 00:16:33.053 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 396], 00:16:33.053 | 70.00th=[ 408], 80.00th=[ 424], 90.00th=[ 441], 95.00th=[ 457], 00:16:33.053 | 99.00th=[ 494], 99.50th=[ 515], 99.90th=[ 611], 99.95th=[ 865], 00:16:33.053 | 99.99th=[ 2900] 00:16:33.053 bw ( KiB/s): min= 9224, max= 9608, per=24.76%, avg=9425.60, stdev=163.70, samples=5 00:16:33.053 iops : min= 2306, max= 2402, avg=2356.40, stdev=40.92, samples=5 00:16:33.053 lat (usec) : 250=0.42%, 500=98.76%, 750=0.74%, 1000=0.03% 00:16:33.053 lat (msec) : 2=0.03%, 4=0.01% 00:16:33.053 cpu : usr=1.80%, sys=5.74%, ctx=6924, majf=0, minf=1 00:16:33.053 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:33.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.053 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.053 issued rwts: total=6920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.053 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:33.053 00:16:33.053 Run status group 0 (all jobs): 00:16:33.053 READ: bw=37.2MiB/s (39.0MB/s), 9398KiB/s-10.7MiB/s (9623kB/s-11.2MB/s), io=136MiB (142MB), run=2945-3651msec 00:16:33.053 00:16:33.053 Disk stats (read/write): 00:16:33.053 nvme0n1: ios=9181/0, merge=0/0, ticks=3207/0, in_queue=3207, util=95.19% 00:16:33.053 nvme0n2: ios=9843/0, merge=0/0, ticks=3422/0, in_queue=3422, util=95.63% 00:16:33.053 nvme0n3: ios=8295/0, merge=0/0, ticks=2994/0, in_queue=2994, util=96.49% 00:16:33.053 nvme0n4: ios=6756/0, merge=0/0, ticks=2734/0, in_queue=2734, util=96.83% 00:16:33.053 00:25:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:33.053 00:25:20 -- 
target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:33.312 00:25:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:33.312 00:25:20 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:33.571 00:25:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:33.571 00:25:20 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:34.138 00:25:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:34.138 00:25:21 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:34.397 00:25:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:34.398 00:25:21 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:34.656 00:25:21 -- target/fio.sh@69 -- # fio_status=0 00:16:34.656 00:25:21 -- target/fio.sh@70 -- # wait 86982 00:16:34.656 00:25:21 -- target/fio.sh@70 -- # fio_status=4 00:16:34.656 00:25:21 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:34.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.656 00:25:21 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:34.656 00:25:21 -- common/autotest_common.sh@1198 -- # local i=0 00:16:34.656 00:25:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:34.656 00:25:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:34.656 00:25:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:34.656 00:25:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:34.657 nvmf hotplug test: fio failed as expected 00:16:34.657 00:25:21 -- common/autotest_common.sh@1210 -- # return 0 00:16:34.657 00:25:21 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:34.657 00:25:21 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:34.657 00:25:21 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.915 00:25:22 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:34.915 00:25:22 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:34.915 00:25:22 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:34.915 00:25:22 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:34.915 00:25:22 -- target/fio.sh@91 -- # nvmftestfini 00:16:34.915 00:25:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:34.915 00:25:22 -- nvmf/common.sh@116 -- # sync 00:16:34.915 00:25:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:34.915 00:25:22 -- nvmf/common.sh@119 -- # set +e 00:16:34.915 00:25:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:34.915 00:25:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:34.915 rmmod nvme_tcp 00:16:34.915 rmmod nvme_fabrics 00:16:34.915 rmmod nvme_keyring 00:16:34.915 00:25:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:34.915 00:25:22 -- nvmf/common.sh@123 -- # set -e 00:16:34.915 00:25:22 -- nvmf/common.sh@124 -- # return 0 00:16:34.915 00:25:22 -- nvmf/common.sh@477 -- # '[' -n 86491 ']' 00:16:34.915 00:25:22 -- nvmf/common.sh@478 -- # killprocess 86491 00:16:34.915 00:25:22 -- 
common/autotest_common.sh@926 -- # '[' -z 86491 ']' 00:16:34.915 00:25:22 -- common/autotest_common.sh@930 -- # kill -0 86491 00:16:34.915 00:25:22 -- common/autotest_common.sh@931 -- # uname 00:16:34.915 00:25:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:34.915 00:25:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86491 00:16:34.915 killing process with pid 86491 00:16:34.915 00:25:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:34.915 00:25:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:34.915 00:25:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86491' 00:16:34.915 00:25:22 -- common/autotest_common.sh@945 -- # kill 86491 00:16:34.915 00:25:22 -- common/autotest_common.sh@950 -- # wait 86491 00:16:35.174 00:25:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:35.174 00:25:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:35.174 00:25:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:35.174 00:25:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:35.174 00:25:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:35.174 00:25:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.174 00:25:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.174 00:25:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.174 00:25:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:35.174 00:16:35.174 real 0m19.403s 00:16:35.174 user 1m15.272s 00:16:35.174 sys 0m8.055s 00:16:35.174 00:25:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:35.174 ************************************ 00:16:35.174 END TEST nvmf_fio_target 00:16:35.174 ************************************ 00:16:35.174 00:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:35.463 00:25:22 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:35.463 00:25:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:35.463 00:25:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:35.463 00:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:35.463 ************************************ 00:16:35.463 START TEST nvmf_bdevio 00:16:35.463 ************************************ 00:16:35.463 00:25:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:35.463 * Looking for test storage... 
00:16:35.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:35.463 00:25:22 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:35.463 00:25:22 -- nvmf/common.sh@7 -- # uname -s 00:16:35.463 00:25:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.463 00:25:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.463 00:25:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.463 00:25:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.463 00:25:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.463 00:25:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.463 00:25:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.463 00:25:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.463 00:25:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.463 00:25:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.463 00:25:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:16:35.463 00:25:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:16:35.463 00:25:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.463 00:25:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.463 00:25:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:35.463 00:25:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:35.463 00:25:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.463 00:25:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.463 00:25:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.463 00:25:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.463 00:25:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.463 00:25:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.463 00:25:22 -- 
paths/export.sh@5 -- # export PATH 00:16:35.463 00:25:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.463 00:25:22 -- nvmf/common.sh@46 -- # : 0 00:16:35.463 00:25:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:35.463 00:25:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:35.463 00:25:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:35.463 00:25:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.463 00:25:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.463 00:25:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:35.463 00:25:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:35.463 00:25:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:35.463 00:25:22 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:35.463 00:25:22 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:35.463 00:25:22 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:35.463 00:25:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:35.463 00:25:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.463 00:25:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:35.463 00:25:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:35.463 00:25:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:35.463 00:25:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.463 00:25:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.463 00:25:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.463 00:25:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:35.463 00:25:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:35.463 00:25:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:35.463 00:25:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:35.463 00:25:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:35.463 00:25:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:35.463 00:25:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.463 00:25:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.463 00:25:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:35.463 00:25:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:35.463 00:25:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:35.463 00:25:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:35.463 00:25:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:35.463 00:25:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.463 00:25:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:35.463 00:25:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:35.463 00:25:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:35.463 00:25:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:35.463 00:25:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:35.463 
00:25:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:35.463 Cannot find device "nvmf_tgt_br" 00:16:35.463 00:25:22 -- nvmf/common.sh@154 -- # true 00:16:35.463 00:25:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.463 Cannot find device "nvmf_tgt_br2" 00:16:35.463 00:25:22 -- nvmf/common.sh@155 -- # true 00:16:35.463 00:25:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:35.463 00:25:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:35.463 Cannot find device "nvmf_tgt_br" 00:16:35.463 00:25:22 -- nvmf/common.sh@157 -- # true 00:16:35.463 00:25:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:35.463 Cannot find device "nvmf_tgt_br2" 00:16:35.463 00:25:22 -- nvmf/common.sh@158 -- # true 00:16:35.463 00:25:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:35.463 00:25:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:35.463 00:25:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.463 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.463 00:25:22 -- nvmf/common.sh@161 -- # true 00:16:35.463 00:25:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.463 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.463 00:25:22 -- nvmf/common.sh@162 -- # true 00:16:35.463 00:25:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:35.463 00:25:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:35.463 00:25:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:35.744 00:25:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:35.744 00:25:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:35.744 00:25:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:35.744 00:25:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:35.744 00:25:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:35.744 00:25:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:35.744 00:25:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:35.744 00:25:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:35.744 00:25:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:35.744 00:25:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:35.744 00:25:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:35.744 00:25:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:35.744 00:25:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:35.744 00:25:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:35.744 00:25:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:35.744 00:25:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:35.744 00:25:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:35.744 00:25:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:35.744 00:25:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:35.744 00:25:22 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:35.744 00:25:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:35.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:16:35.744 00:16:35.744 --- 10.0.0.2 ping statistics --- 00:16:35.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.744 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:35.744 00:25:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:35.744 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:35.744 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:35.744 00:16:35.744 --- 10.0.0.3 ping statistics --- 00:16:35.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.744 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:35.744 00:25:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:35.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:35.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:16:35.744 00:16:35.744 --- 10.0.0.1 ping statistics --- 00:16:35.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.744 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:35.744 00:25:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.744 00:25:22 -- nvmf/common.sh@421 -- # return 0 00:16:35.744 00:25:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:35.744 00:25:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.744 00:25:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:35.744 00:25:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:35.744 00:25:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.744 00:25:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:35.744 00:25:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:35.744 00:25:22 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:35.744 00:25:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:35.744 00:25:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:35.744 00:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:35.744 00:25:22 -- nvmf/common.sh@469 -- # nvmfpid=87355 00:16:35.744 00:25:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:35.744 00:25:22 -- nvmf/common.sh@470 -- # waitforlisten 87355 00:16:35.744 00:25:22 -- common/autotest_common.sh@819 -- # '[' -z 87355 ']' 00:16:35.744 00:25:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.744 00:25:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:35.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.744 00:25:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.744 00:25:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:35.744 00:25:22 -- common/autotest_common.sh@10 -- # set +x 00:16:35.744 [2024-07-13 00:25:22.949216] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:35.744 [2024-07-13 00:25:22.949328] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.003 [2024-07-13 00:25:23.094340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:36.003 [2024-07-13 00:25:23.208682] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:36.003 [2024-07-13 00:25:23.208880] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.003 [2024-07-13 00:25:23.208907] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.003 [2024-07-13 00:25:23.208919] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.003 [2024-07-13 00:25:23.209104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:36.003 [2024-07-13 00:25:23.209230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:36.003 [2024-07-13 00:25:23.209353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:36.003 [2024-07-13 00:25:23.209360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:36.937 00:25:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:36.937 00:25:23 -- common/autotest_common.sh@852 -- # return 0 00:16:36.937 00:25:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:36.937 00:25:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:36.937 00:25:23 -- common/autotest_common.sh@10 -- # set +x 00:16:36.937 00:25:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.938 00:25:23 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:36.938 00:25:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:36.938 00:25:23 -- common/autotest_common.sh@10 -- # set +x 00:16:36.938 [2024-07-13 00:25:24.000582] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.938 00:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:36.938 00:25:24 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:36.938 00:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:36.938 00:25:24 -- common/autotest_common.sh@10 -- # set +x 00:16:36.938 Malloc0 00:16:36.938 00:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:36.938 00:25:24 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:36.938 00:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:36.938 00:25:24 -- common/autotest_common.sh@10 -- # set +x 00:16:36.938 00:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:36.938 00:25:24 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:36.938 00:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:36.938 00:25:24 -- common/autotest_common.sh@10 -- # set +x 00:16:36.938 00:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:36.938 00:25:24 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.938 00:25:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:36.938 00:25:24 -- common/autotest_common.sh@10 -- # set +x 00:16:36.938 
[2024-07-13 00:25:24.069827] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.938 00:25:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:36.938 00:25:24 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:36.938 00:25:24 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:36.938 00:25:24 -- nvmf/common.sh@520 -- # config=() 00:16:36.938 00:25:24 -- nvmf/common.sh@520 -- # local subsystem config 00:16:36.938 00:25:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:36.938 00:25:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:36.938 { 00:16:36.938 "params": { 00:16:36.938 "name": "Nvme$subsystem", 00:16:36.938 "trtype": "$TEST_TRANSPORT", 00:16:36.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.938 "adrfam": "ipv4", 00:16:36.938 "trsvcid": "$NVMF_PORT", 00:16:36.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.938 "hdgst": ${hdgst:-false}, 00:16:36.938 "ddgst": ${ddgst:-false} 00:16:36.938 }, 00:16:36.938 "method": "bdev_nvme_attach_controller" 00:16:36.938 } 00:16:36.938 EOF 00:16:36.938 )") 00:16:36.938 00:25:24 -- nvmf/common.sh@542 -- # cat 00:16:36.938 00:25:24 -- nvmf/common.sh@544 -- # jq . 00:16:36.938 00:25:24 -- nvmf/common.sh@545 -- # IFS=, 00:16:36.938 00:25:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:36.938 "params": { 00:16:36.938 "name": "Nvme1", 00:16:36.938 "trtype": "tcp", 00:16:36.938 "traddr": "10.0.0.2", 00:16:36.938 "adrfam": "ipv4", 00:16:36.938 "trsvcid": "4420", 00:16:36.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:36.938 "hdgst": false, 00:16:36.938 "ddgst": false 00:16:36.938 }, 00:16:36.938 "method": "bdev_nvme_attach_controller" 00:16:36.938 }' 00:16:36.938 [2024-07-13 00:25:24.128971] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:36.938 [2024-07-13 00:25:24.129642] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87409 ] 00:16:37.197 [2024-07-13 00:25:24.273574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:37.197 [2024-07-13 00:25:24.366359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.197 [2024-07-13 00:25:24.366518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.197 [2024-07-13 00:25:24.366854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.456 [2024-07-13 00:25:24.538575] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:16:37.456 [2024-07-13 00:25:24.538900] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:37.456 I/O targets: 00:16:37.456 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:37.456 00:16:37.456 00:16:37.456 CUnit - A unit testing framework for C - Version 2.1-3 00:16:37.456 http://cunit.sourceforge.net/ 00:16:37.456 00:16:37.456 00:16:37.456 Suite: bdevio tests on: Nvme1n1 00:16:37.456 Test: blockdev write read block ...passed 00:16:37.456 Test: blockdev write zeroes read block ...passed 00:16:37.456 Test: blockdev write zeroes read no split ...passed 00:16:37.456 Test: blockdev write zeroes read split ...passed 00:16:37.456 Test: blockdev write zeroes read split partial ...passed 00:16:37.456 Test: blockdev reset ...[2024-07-13 00:25:24.655486] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:37.456 [2024-07-13 00:25:24.655757] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce84e0 (9): Bad file descriptor 00:16:37.456 passed 00:16:37.456 Test: blockdev write read 8 blocks ...passed 00:16:37.456 Test: blockdev write read size > 128k ...passed 00:16:37.456 Test: blockdev write read invalid size ...passed 00:16:37.456 Test: blockdev write read offset + nbytes == size of blockdev ...[2024-07-13 00:25:24.676863] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:37.714 passed 00:16:37.714 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:37.714 Test: blockdev write read max offset ...passed 00:16:37.714 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:37.714 Test: blockdev writev readv 8 blocks ...passed 00:16:37.714 Test: blockdev writev readv 30 x 1block ...passed 00:16:37.714 Test: blockdev writev readv block ...passed 00:16:37.714 Test: blockdev writev readv size > 128k ...passed 00:16:37.714 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:37.715 Test: blockdev comparev and writev ...[2024-07-13 00:25:24.852240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:37.715 [2024-07-13 00:25:24.852298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:37.715 [2024-07-13 00:25:24.852318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:37.715 [2024-07-13 00:25:24.852329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:37.715 [2024-07-13 00:25:24.853037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:37.715 [2024-07-13 00:25:24.853061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:37.715 [2024-07-13 00:25:24.853079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:37.715 [2024-07-13 00:25:24.853090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:37.715 [2024-07-13 00:25:24.854091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:37.715 [2024-07-13 00:25:24.854123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:37.715 [2024-07-13 00:25:24.854140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:37.715 [2024-07-13 00:25:24.854151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:37.715 [2024-07-13 00:25:24.854738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:37.715 [2024-07-13 00:25:24.854761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:37.715 [2024-07-13 00:25:24.854778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:37.715 [2024-07-13 00:25:24.854788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:37.715 passed 00:16:37.715 Test: blockdev nvme passthru rw ...passed 00:16:37.715 Test: blockdev nvme passthru vendor specific ...[2024-07-13 00:25:24.937020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:37.715 [2024-07-13 00:25:24.937060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:37.715 [2024-07-13 00:25:24.937447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:37.715 [2024-07-13 00:25:24.937476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:37.715 [2024-07-13 00:25:24.937716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:37.715 [2024-07-13 00:25:24.937743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:37.715 [2024-07-13 00:25:24.938055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:37.715 [2024-07-13 00:25:24.938082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:37.715 passed 00:16:37.974 Test: blockdev nvme admin passthru ...passed 00:16:37.974 Test: blockdev copy ...passed 00:16:37.974 00:16:37.974 Run Summary: Type Total Ran Passed Failed Inactive 00:16:37.974 suites 1 1 n/a 0 0 00:16:37.974 tests 23 23 23 0 0 00:16:37.974 asserts 152 152 152 0 n/a 00:16:37.974 00:16:37.974 Elapsed time = 0.913 seconds 00:16:37.974 00:25:25 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.974 00:25:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:37.974 00:25:25 -- common/autotest_common.sh@10 -- # set +x 00:16:37.974 00:25:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:37.974 00:25:25 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:37.974 00:25:25 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:37.974 00:25:25 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:37.974 00:25:25 -- nvmf/common.sh@116 -- # sync 00:16:38.233 00:25:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:38.233 00:25:25 -- nvmf/common.sh@119 -- # set +e 00:16:38.233 00:25:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:38.233 00:25:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:38.233 rmmod nvme_tcp 00:16:38.233 rmmod nvme_fabrics 00:16:38.233 rmmod nvme_keyring 00:16:38.233 00:25:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:38.233 00:25:25 -- nvmf/common.sh@123 -- # set -e 00:16:38.233 00:25:25 -- nvmf/common.sh@124 -- # return 0 00:16:38.233 00:25:25 -- nvmf/common.sh@477 -- # '[' -n 87355 ']' 00:16:38.233 00:25:25 -- nvmf/common.sh@478 -- # killprocess 87355 00:16:38.233 00:25:25 -- common/autotest_common.sh@926 -- # '[' -z 87355 ']' 00:16:38.233 00:25:25 -- common/autotest_common.sh@930 -- # kill -0 87355 00:16:38.233 00:25:25 -- common/autotest_common.sh@931 -- # uname 00:16:38.233 00:25:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:38.233 00:25:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87355 00:16:38.233 00:25:25 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:38.233 00:25:25 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:38.233 killing process with pid 87355 00:16:38.233 00:25:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87355' 00:16:38.233 00:25:25 -- common/autotest_common.sh@945 -- # kill 87355 00:16:38.233 00:25:25 -- common/autotest_common.sh@950 -- # wait 87355 00:16:38.491 00:25:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:38.491 00:25:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:38.491 00:25:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:38.491 00:25:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:38.491 00:25:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:38.491 00:25:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.491 00:25:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.491 00:25:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.491 00:25:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:38.491 00:16:38.491 real 0m3.264s 00:16:38.491 user 0m11.641s 00:16:38.491 sys 0m0.888s 00:16:38.491 00:25:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:38.491 ************************************ 00:16:38.491 END TEST nvmf_bdevio 00:16:38.491 ************************************ 00:16:38.491 00:25:25 -- common/autotest_common.sh@10 -- # set +x 00:16:38.751 00:25:25 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:38.751 00:25:25 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:38.751 00:25:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:38.751 00:25:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:38.751 00:25:25 -- common/autotest_common.sh@10 -- # set +x 00:16:38.751 ************************************ 00:16:38.751 START TEST nvmf_bdevio_no_huge 00:16:38.751 ************************************ 00:16:38.751 00:25:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:38.751 * Looking for test storage... 
00:16:38.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:38.751 00:25:25 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:38.751 00:25:25 -- nvmf/common.sh@7 -- # uname -s 00:16:38.751 00:25:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.751 00:25:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.751 00:25:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.751 00:25:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.751 00:25:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.751 00:25:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.751 00:25:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.751 00:25:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.751 00:25:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.751 00:25:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.751 00:25:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:16:38.751 00:25:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:16:38.751 00:25:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.751 00:25:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.751 00:25:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:38.751 00:25:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:38.751 00:25:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.751 00:25:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.751 00:25:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.751 00:25:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.751 00:25:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.751 00:25:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.751 00:25:25 -- 
paths/export.sh@5 -- # export PATH 00:16:38.751 00:25:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.751 00:25:25 -- nvmf/common.sh@46 -- # : 0 00:16:38.751 00:25:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:38.751 00:25:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:38.751 00:25:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:38.751 00:25:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.751 00:25:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.751 00:25:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:38.751 00:25:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:38.751 00:25:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:38.751 00:25:25 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:38.751 00:25:25 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:38.751 00:25:25 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:38.751 00:25:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:38.751 00:25:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.751 00:25:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:38.751 00:25:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:38.751 00:25:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:38.751 00:25:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.751 00:25:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.751 00:25:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.751 00:25:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:38.751 00:25:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:38.751 00:25:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:38.751 00:25:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:38.751 00:25:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:38.751 00:25:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:38.751 00:25:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.751 00:25:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:38.751 00:25:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:38.751 00:25:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:38.751 00:25:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:38.751 00:25:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:38.751 00:25:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:38.751 00:25:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.751 00:25:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:38.751 00:25:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:38.751 00:25:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:38.751 00:25:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:38.751 00:25:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:38.751 
00:25:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:38.751 Cannot find device "nvmf_tgt_br" 00:16:38.751 00:25:25 -- nvmf/common.sh@154 -- # true 00:16:38.751 00:25:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:38.751 Cannot find device "nvmf_tgt_br2" 00:16:38.751 00:25:25 -- nvmf/common.sh@155 -- # true 00:16:38.751 00:25:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:38.751 00:25:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:38.751 Cannot find device "nvmf_tgt_br" 00:16:38.751 00:25:25 -- nvmf/common.sh@157 -- # true 00:16:38.751 00:25:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:38.751 Cannot find device "nvmf_tgt_br2" 00:16:38.751 00:25:25 -- nvmf/common.sh@158 -- # true 00:16:38.751 00:25:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:38.751 00:25:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:38.751 00:25:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:38.751 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.751 00:25:25 -- nvmf/common.sh@161 -- # true 00:16:38.751 00:25:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:38.751 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.751 00:25:25 -- nvmf/common.sh@162 -- # true 00:16:38.751 00:25:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:38.751 00:25:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:38.751 00:25:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:38.751 00:25:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:39.011 00:25:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:39.011 00:25:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:39.011 00:25:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:39.011 00:25:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:39.011 00:25:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:39.011 00:25:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:39.011 00:25:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:39.011 00:25:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:39.011 00:25:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:39.011 00:25:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:39.011 00:25:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:39.011 00:25:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:39.011 00:25:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:39.011 00:25:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:39.011 00:25:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:39.011 00:25:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:39.011 00:25:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:39.011 00:25:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:39.011 00:25:26 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:39.011 00:25:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:39.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:16:39.011 00:16:39.011 --- 10.0.0.2 ping statistics --- 00:16:39.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.011 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:16:39.011 00:25:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:39.011 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:39.011 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:39.011 00:16:39.011 --- 10.0.0.3 ping statistics --- 00:16:39.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.011 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:39.011 00:25:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:39.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:39.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:39.011 00:16:39.011 --- 10.0.0.1 ping statistics --- 00:16:39.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.011 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:39.011 00:25:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.011 00:25:26 -- nvmf/common.sh@421 -- # return 0 00:16:39.011 00:25:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:39.011 00:25:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.011 00:25:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:39.011 00:25:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:39.011 00:25:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.011 00:25:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:39.011 00:25:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:39.011 00:25:26 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:39.011 00:25:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:39.011 00:25:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:39.011 00:25:26 -- common/autotest_common.sh@10 -- # set +x 00:16:39.011 00:25:26 -- nvmf/common.sh@469 -- # nvmfpid=87595 00:16:39.011 00:25:26 -- nvmf/common.sh@470 -- # waitforlisten 87595 00:16:39.011 00:25:26 -- common/autotest_common.sh@819 -- # '[' -z 87595 ']' 00:16:39.011 00:25:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.011 00:25:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:39.011 00:25:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:39.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.011 00:25:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.011 00:25:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:39.011 00:25:26 -- common/autotest_common.sh@10 -- # set +x 00:16:39.270 [2024-07-13 00:25:26.258860] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
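For reference, the test-bed network that nvmf_veth_init assembles in the trace above can be reproduced by hand with roughly the commands below. Namespace, interface, and address names are copied from the log; this is a condensed sketch of the setup, not the exact common.sh implementation (error handling and teardown of stale devices are omitted).

    # target namespace plus three veth pairs (one initiator-side, two target-side interfaces)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # move the target ends into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring the links up and bridge the host-side peers together
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # admit NVMe/TCP traffic, allow bridge forwarding, and verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then started inside nvmf_tgt_ns_spdk, so the host-side initiator reaches 10.0.0.2:4420 across nvmf_br.
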
00:16:39.270 [2024-07-13 00:25:26.258976] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:39.270 [2024-07-13 00:25:26.406920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.270 [2024-07-13 00:25:26.493704] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:39.270 [2024-07-13 00:25:26.494127] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.270 [2024-07-13 00:25:26.494179] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.270 [2024-07-13 00:25:26.494307] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.270 [2024-07-13 00:25:26.494788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:39.270 [2024-07-13 00:25:26.494922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:39.270 [2024-07-13 00:25:26.495177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:39.270 [2024-07-13 00:25:26.495187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.207 00:25:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:40.207 00:25:27 -- common/autotest_common.sh@852 -- # return 0 00:16:40.207 00:25:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:40.207 00:25:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:40.207 00:25:27 -- common/autotest_common.sh@10 -- # set +x 00:16:40.207 00:25:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.207 00:25:27 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.207 00:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.207 00:25:27 -- common/autotest_common.sh@10 -- # set +x 00:16:40.207 [2024-07-13 00:25:27.228667] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.207 00:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.207 00:25:27 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:40.207 00:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.207 00:25:27 -- common/autotest_common.sh@10 -- # set +x 00:16:40.207 Malloc0 00:16:40.207 00:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.207 00:25:27 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:40.207 00:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.207 00:25:27 -- common/autotest_common.sh@10 -- # set +x 00:16:40.207 00:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.207 00:25:27 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:40.207 00:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.207 00:25:27 -- common/autotest_common.sh@10 -- # set +x 00:16:40.207 00:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.207 00:25:27 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.207 00:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.207 00:25:27 -- common/autotest_common.sh@10 -- # set +x 00:16:40.207 
[2024-07-13 00:25:27.273003] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.207 00:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:40.207 00:25:27 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:40.207 00:25:27 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:40.207 00:25:27 -- nvmf/common.sh@520 -- # config=() 00:16:40.207 00:25:27 -- nvmf/common.sh@520 -- # local subsystem config 00:16:40.207 00:25:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:40.207 00:25:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:40.207 { 00:16:40.207 "params": { 00:16:40.207 "name": "Nvme$subsystem", 00:16:40.207 "trtype": "$TEST_TRANSPORT", 00:16:40.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.207 "adrfam": "ipv4", 00:16:40.207 "trsvcid": "$NVMF_PORT", 00:16:40.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.207 "hdgst": ${hdgst:-false}, 00:16:40.207 "ddgst": ${ddgst:-false} 00:16:40.207 }, 00:16:40.207 "method": "bdev_nvme_attach_controller" 00:16:40.207 } 00:16:40.207 EOF 00:16:40.207 )") 00:16:40.207 00:25:27 -- nvmf/common.sh@542 -- # cat 00:16:40.207 00:25:27 -- nvmf/common.sh@544 -- # jq . 00:16:40.207 00:25:27 -- nvmf/common.sh@545 -- # IFS=, 00:16:40.207 00:25:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:40.207 "params": { 00:16:40.207 "name": "Nvme1", 00:16:40.207 "trtype": "tcp", 00:16:40.207 "traddr": "10.0.0.2", 00:16:40.207 "adrfam": "ipv4", 00:16:40.207 "trsvcid": "4420", 00:16:40.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:40.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:40.207 "hdgst": false, 00:16:40.207 "ddgst": false 00:16:40.207 }, 00:16:40.207 "method": "bdev_nvme_attach_controller" 00:16:40.207 }' 00:16:40.207 [2024-07-13 00:25:27.330042] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:40.207 [2024-07-13 00:25:27.330139] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid87649 ] 00:16:40.466 [2024-07-13 00:25:27.471547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:40.466 [2024-07-13 00:25:27.614725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.466 [2024-07-13 00:25:27.614876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.466 [2024-07-13 00:25:27.614882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.724 [2024-07-13 00:25:27.793121] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
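The rpc_cmd calls traced above are what stand up the target that bdevio then exercises: a TCP transport, a 64 MiB / 512-byte-block malloc bdev, a subsystem, a namespace, and a listener. The harness's rpc_cmd wrapper forwards them to scripts/rpc.py against the target's default /var/tmp/spdk.sock; issued by hand, the sequence would look roughly like this sketch:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock by default

    $RPC nvmf_create_transport -t tcp -o -u 8192       # same transport flags as in the trace
    $RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB backing bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The "RPC Unix domain socket path /var/tmp/spdk.sock in use" messages in this stretch of the log come from the bdevio process attempting to start its own RPC server on the same default path that nvmf_tgt already holds; the bdevio suite still runs to completion.
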
00:16:40.724 [2024-07-13 00:25:27.793159] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:40.724 I/O targets: 00:16:40.724 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:40.724 00:16:40.724 00:16:40.724 CUnit - A unit testing framework for C - Version 2.1-3 00:16:40.724 http://cunit.sourceforge.net/ 00:16:40.724 00:16:40.724 00:16:40.724 Suite: bdevio tests on: Nvme1n1 00:16:40.724 Test: blockdev write read block ...passed 00:16:40.724 Test: blockdev write zeroes read block ...passed 00:16:40.724 Test: blockdev write zeroes read no split ...passed 00:16:40.724 Test: blockdev write zeroes read split ...passed 00:16:40.724 Test: blockdev write zeroes read split partial ...passed 00:16:40.724 Test: blockdev reset ...[2024-07-13 00:25:27.922730] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:40.724 [2024-07-13 00:25:27.922819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e03160 (9): Bad file descriptor 00:16:40.724 [2024-07-13 00:25:27.943250] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:40.724 passed 00:16:40.724 Test: blockdev write read 8 blocks ...passed 00:16:40.724 Test: blockdev write read size > 128k ...passed 00:16:40.724 Test: blockdev write read invalid size ...passed 00:16:40.982 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:40.982 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:40.982 Test: blockdev write read max offset ...passed 00:16:40.982 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:40.982 Test: blockdev writev readv 8 blocks ...passed 00:16:40.982 Test: blockdev writev readv 30 x 1block ...passed 00:16:40.982 Test: blockdev writev readv block ...passed 00:16:40.982 Test: blockdev writev readv size > 128k ...passed 00:16:40.982 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:40.982 Test: blockdev comparev and writev ...[2024-07-13 00:25:28.121118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.982 [2024-07-13 00:25:28.121316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.982 [2024-07-13 00:25:28.121429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.982 [2024-07-13 00:25:28.121507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:40.982 [2024-07-13 00:25:28.122099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.982 [2024-07-13 00:25:28.122231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:40.982 [2024-07-13 00:25:28.122304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.982 [2024-07-13 00:25:28.122383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:40.982 [2024-07-13 00:25:28.123020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.982 [2024-07-13 00:25:28.123147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:40.982 [2024-07-13 00:25:28.123218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.982 [2024-07-13 00:25:28.123281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:40.982 [2024-07-13 00:25:28.123919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.982 [2024-07-13 00:25:28.124029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:40.982 [2024-07-13 00:25:28.124100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.982 [2024-07-13 00:25:28.124164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:40.982 passed 00:16:40.982 Test: blockdev nvme passthru rw ...passed 00:16:40.982 Test: blockdev nvme passthru vendor specific ...[2024-07-13 00:25:28.207986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.982 [2024-07-13 00:25:28.208149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:40.983 [2024-07-13 00:25:28.208595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.983 [2024-07-13 00:25:28.208714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:40.983 [2024-07-13 00:25:28.208906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.983 [2024-07-13 00:25:28.208977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:40.983 [2024-07-13 00:25:28.209300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.983 [2024-07-13 00:25:28.209396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:40.983 passed 00:16:41.240 Test: blockdev nvme admin passthru ...passed 00:16:41.240 Test: blockdev copy ...passed 00:16:41.240 00:16:41.240 Run Summary: Type Total Ran Passed Failed Inactive 00:16:41.240 suites 1 1 n/a 0 0 00:16:41.240 tests 23 23 23 0 0 00:16:41.240 asserts 152 152 152 0 n/a 00:16:41.240 00:16:41.240 Elapsed time = 0.947 seconds 00:16:41.499 00:25:28 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.499 00:25:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:41.499 00:25:28 -- common/autotest_common.sh@10 -- # set +x 00:16:41.499 00:25:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:41.499 00:25:28 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:41.499 00:25:28 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:41.499 00:25:28 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:41.499 00:25:28 -- nvmf/common.sh@116 -- # sync 00:16:41.499 00:25:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:41.499 00:25:28 -- nvmf/common.sh@119 -- # set +e 00:16:41.499 00:25:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:41.499 00:25:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:41.499 rmmod nvme_tcp 00:16:41.499 rmmod nvme_fabrics 00:16:41.499 rmmod nvme_keyring 00:16:41.499 00:25:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:41.499 00:25:28 -- nvmf/common.sh@123 -- # set -e 00:16:41.499 00:25:28 -- nvmf/common.sh@124 -- # return 0 00:16:41.499 00:25:28 -- nvmf/common.sh@477 -- # '[' -n 87595 ']' 00:16:41.499 00:25:28 -- nvmf/common.sh@478 -- # killprocess 87595 00:16:41.499 00:25:28 -- common/autotest_common.sh@926 -- # '[' -z 87595 ']' 00:16:41.499 00:25:28 -- common/autotest_common.sh@930 -- # kill -0 87595 00:16:41.499 00:25:28 -- common/autotest_common.sh@931 -- # uname 00:16:41.757 00:25:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:41.757 00:25:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87595 00:16:41.757 killing process with pid 87595 00:16:41.757 00:25:28 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:41.757 00:25:28 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:41.757 00:25:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87595' 00:16:41.757 00:25:28 -- common/autotest_common.sh@945 -- # kill 87595 00:16:41.757 00:25:28 -- common/autotest_common.sh@950 -- # wait 87595 00:16:42.015 00:25:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:42.015 00:25:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:42.015 00:25:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:42.015 00:25:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.015 00:25:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:42.015 00:25:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.015 00:25:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.015 00:25:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.015 00:25:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:42.015 00:16:42.015 real 0m3.395s 00:16:42.015 user 0m12.221s 00:16:42.015 sys 0m1.270s 00:16:42.015 00:25:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:42.015 ************************************ 00:16:42.015 00:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:42.015 END TEST nvmf_bdevio_no_huge 00:16:42.015 ************************************ 00:16:42.015 00:25:29 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:42.015 00:25:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:42.015 00:25:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:42.015 00:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:42.015 ************************************ 00:16:42.015 START TEST nvmf_tls 00:16:42.015 ************************************ 00:16:42.015 00:25:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:42.273 * Looking for test storage... 
00:16:42.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:42.273 00:25:29 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:42.273 00:25:29 -- nvmf/common.sh@7 -- # uname -s 00:16:42.273 00:25:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.273 00:25:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.273 00:25:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.273 00:25:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.273 00:25:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.273 00:25:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.273 00:25:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.273 00:25:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.273 00:25:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.273 00:25:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.273 00:25:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:16:42.273 00:25:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:16:42.273 00:25:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.273 00:25:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.273 00:25:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:42.273 00:25:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:42.273 00:25:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.273 00:25:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.273 00:25:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.273 00:25:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.273 00:25:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.273 00:25:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.273 00:25:29 -- paths/export.sh@5 
-- # export PATH 00:16:42.273 00:25:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.273 00:25:29 -- nvmf/common.sh@46 -- # : 0 00:16:42.274 00:25:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:42.274 00:25:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:42.274 00:25:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:42.274 00:25:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.274 00:25:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.274 00:25:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:42.274 00:25:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:42.274 00:25:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:42.274 00:25:29 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:42.274 00:25:29 -- target/tls.sh@71 -- # nvmftestinit 00:16:42.274 00:25:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:42.274 00:25:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.274 00:25:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:42.274 00:25:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:42.274 00:25:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:42.274 00:25:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.274 00:25:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.274 00:25:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.274 00:25:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:42.274 00:25:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:42.274 00:25:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:42.274 00:25:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:42.274 00:25:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:42.274 00:25:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:42.274 00:25:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.274 00:25:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.274 00:25:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:42.274 00:25:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:42.274 00:25:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:42.274 00:25:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:42.274 00:25:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:42.274 00:25:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.274 00:25:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:42.274 00:25:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:42.274 00:25:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:42.274 00:25:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:42.274 00:25:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:42.274 00:25:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:16:42.274 Cannot find device "nvmf_tgt_br" 00:16:42.274 00:25:29 -- nvmf/common.sh@154 -- # true 00:16:42.274 00:25:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:42.274 Cannot find device "nvmf_tgt_br2" 00:16:42.274 00:25:29 -- nvmf/common.sh@155 -- # true 00:16:42.274 00:25:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:42.274 00:25:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:42.274 Cannot find device "nvmf_tgt_br" 00:16:42.274 00:25:29 -- nvmf/common.sh@157 -- # true 00:16:42.274 00:25:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:42.274 Cannot find device "nvmf_tgt_br2" 00:16:42.274 00:25:29 -- nvmf/common.sh@158 -- # true 00:16:42.274 00:25:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:42.274 00:25:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:42.274 00:25:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:42.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.274 00:25:29 -- nvmf/common.sh@161 -- # true 00:16:42.274 00:25:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:42.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.274 00:25:29 -- nvmf/common.sh@162 -- # true 00:16:42.274 00:25:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:42.274 00:25:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:42.274 00:25:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:42.274 00:25:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:42.274 00:25:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:42.274 00:25:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:42.274 00:25:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:42.274 00:25:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:42.274 00:25:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:42.532 00:25:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:42.532 00:25:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:42.532 00:25:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:42.532 00:25:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:42.532 00:25:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:42.532 00:25:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:42.532 00:25:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:42.532 00:25:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:42.532 00:25:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:42.532 00:25:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:42.532 00:25:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:42.532 00:25:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:42.532 00:25:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:42.532 00:25:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:16:42.532 00:25:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:42.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:42.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:16:42.532 00:16:42.532 --- 10.0.0.2 ping statistics --- 00:16:42.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.532 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:42.532 00:25:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:42.532 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:42.532 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:16:42.532 00:16:42.532 --- 10.0.0.3 ping statistics --- 00:16:42.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.532 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:42.532 00:25:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:42.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:42.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:16:42.532 00:16:42.532 --- 10.0.0.1 ping statistics --- 00:16:42.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.532 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:42.532 00:25:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.532 00:25:29 -- nvmf/common.sh@421 -- # return 0 00:16:42.532 00:25:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:42.532 00:25:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.532 00:25:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:42.532 00:25:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:42.532 00:25:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.532 00:25:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:42.532 00:25:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:42.532 00:25:29 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:42.532 00:25:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:42.532 00:25:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:42.532 00:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:42.532 00:25:29 -- nvmf/common.sh@469 -- # nvmfpid=87830 00:16:42.532 00:25:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:42.532 00:25:29 -- nvmf/common.sh@470 -- # waitforlisten 87830 00:16:42.532 00:25:29 -- common/autotest_common.sh@819 -- # '[' -z 87830 ']' 00:16:42.532 00:25:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.532 00:25:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:42.532 00:25:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.532 00:25:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:42.532 00:25:29 -- common/autotest_common.sh@10 -- # set +x 00:16:42.532 [2024-07-13 00:25:29.696061] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:42.532 [2024-07-13 00:25:29.696133] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.790 [2024-07-13 00:25:29.837019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.790 [2024-07-13 00:25:29.922824] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:42.790 [2024-07-13 00:25:29.923017] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.790 [2024-07-13 00:25:29.923033] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.790 [2024-07-13 00:25:29.923044] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:42.790 [2024-07-13 00:25:29.923091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.722 00:25:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:43.722 00:25:30 -- common/autotest_common.sh@852 -- # return 0 00:16:43.722 00:25:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:43.722 00:25:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:43.722 00:25:30 -- common/autotest_common.sh@10 -- # set +x 00:16:43.722 00:25:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.722 00:25:30 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:43.722 00:25:30 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:43.722 true 00:16:43.722 00:25:30 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:43.722 00:25:30 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:43.979 00:25:31 -- target/tls.sh@82 -- # version=0 00:16:43.979 00:25:31 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:43.979 00:25:31 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:44.236 00:25:31 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:44.236 00:25:31 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:44.493 00:25:31 -- target/tls.sh@90 -- # version=13 00:16:44.493 00:25:31 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:44.493 00:25:31 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:44.751 00:25:31 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:44.751 00:25:31 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:45.009 00:25:32 -- target/tls.sh@98 -- # version=7 00:16:45.009 00:25:32 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:45.009 00:25:32 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:45.009 00:25:32 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:45.267 00:25:32 -- target/tls.sh@105 -- # ktls=false 00:16:45.267 00:25:32 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:45.267 00:25:32 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:45.525 00:25:32 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:45.525 00:25:32 -- target/tls.sh@113 -- # jq -r 
.enable_ktls 00:16:45.783 00:25:32 -- target/tls.sh@113 -- # ktls=true 00:16:45.783 00:25:32 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:45.783 00:25:32 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:46.042 00:25:33 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:46.042 00:25:33 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:46.301 00:25:33 -- target/tls.sh@121 -- # ktls=false 00:16:46.301 00:25:33 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:46.301 00:25:33 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:46.301 00:25:33 -- target/tls.sh@49 -- # local key hash crc 00:16:46.301 00:25:33 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:46.301 00:25:33 -- target/tls.sh@51 -- # hash=01 00:16:46.301 00:25:33 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:46.301 00:25:33 -- target/tls.sh@52 -- # gzip -1 -c 00:16:46.301 00:25:33 -- target/tls.sh@52 -- # head -c 4 00:16:46.301 00:25:33 -- target/tls.sh@52 -- # tail -c8 00:16:46.301 00:25:33 -- target/tls.sh@52 -- # crc='p$H�' 00:16:46.301 00:25:33 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:46.301 00:25:33 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:46.301 00:25:33 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:46.301 00:25:33 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:46.301 00:25:33 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:46.301 00:25:33 -- target/tls.sh@49 -- # local key hash crc 00:16:46.301 00:25:33 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:46.301 00:25:33 -- target/tls.sh@51 -- # hash=01 00:16:46.301 00:25:33 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:46.301 00:25:33 -- target/tls.sh@52 -- # gzip -1 -c 00:16:46.301 00:25:33 -- target/tls.sh@52 -- # tail -c8 00:16:46.301 00:25:33 -- target/tls.sh@52 -- # head -c 4 00:16:46.301 00:25:33 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:46.301 00:25:33 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:46.301 00:25:33 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:46.301 00:25:33 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:46.301 00:25:33 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:46.301 00:25:33 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:46.301 00:25:33 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:46.301 00:25:33 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:46.301 00:25:33 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:46.301 00:25:33 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:46.301 00:25:33 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:46.301 00:25:33 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:46.560 00:25:33 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:46.819 00:25:33 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:46.819 00:25:33 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:46.819 00:25:33 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:47.077 [2024-07-13 00:25:34.155197] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.077 00:25:34 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:47.336 00:25:34 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:47.595 [2024-07-13 00:25:34.623273] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:47.595 [2024-07-13 00:25:34.623526] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.595 00:25:34 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:47.852 malloc0 00:16:47.852 00:25:34 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:48.109 00:25:35 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:48.367 00:25:35 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:58.366 Initializing NVMe Controllers 00:16:58.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:58.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:58.366 Initialization complete. Launching workers. 
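The NVMeTLSkey-1:01:...: strings written to key1.txt and key2.txt earlier in this test come from format_interchange_psk, which appends a CRC32 of the raw key bytes and base64-encodes the result. The CRC is pulled out of a gzip trailer, whose last eight bytes are the CRC32 (little-endian) followed by the input length. A condensed sketch of the same steps, using the first key from the trace:

    key=00112233445566778899aabbccddeeff
    # gzip -1 is used only for its trailer: tail -c8 grabs CRC32+ISIZE, head -c4 keeps the CRC32
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    psk="NVMeTLSkey-1:01:$(echo -n "${key}${crc}" | base64):"
    echo "$psk"
    # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:  (matches the trace)

Holding the CRC bytes in a shell variable works here because this particular CRC happens to contain no NUL or newline bytes. The resulting key files are chmod 0600 before being handed to the target, since the interchange string is a secret.
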
00:16:58.366 ======================================================== 00:16:58.366 Latency(us) 00:16:58.366 Device Information : IOPS MiB/s Average min max 00:16:58.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10838.85 42.34 5905.88 1487.81 13540.56 00:16:58.366 ======================================================== 00:16:58.366 Total : 10838.85 42.34 5905.88 1487.81 13540.56 00:16:58.366 00:16:58.366 00:25:45 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:58.366 00:25:45 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:58.366 00:25:45 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:58.366 00:25:45 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:58.366 00:25:45 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:58.366 00:25:45 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:58.366 00:25:45 -- target/tls.sh@28 -- # bdevperf_pid=88200 00:16:58.366 00:25:45 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:58.366 00:25:45 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:58.366 00:25:45 -- target/tls.sh@31 -- # waitforlisten 88200 /var/tmp/bdevperf.sock 00:16:58.366 00:25:45 -- common/autotest_common.sh@819 -- # '[' -z 88200 ']' 00:16:58.366 00:25:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.366 00:25:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:58.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:58.366 00:25:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:58.366 00:25:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:58.366 00:25:45 -- common/autotest_common.sh@10 -- # set +x 00:16:58.624 [2024-07-13 00:25:45.610289] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:58.624 [2024-07-13 00:25:45.610383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88200 ] 00:16:58.624 [2024-07-13 00:25:45.752937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.882 [2024-07-13 00:25:45.861802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.447 00:25:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:59.447 00:25:46 -- common/autotest_common.sh@852 -- # return 0 00:16:59.447 00:25:46 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:59.705 [2024-07-13 00:25:46.829261] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:59.705 TLSTESTn1 00:16:59.705 00:25:46 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:59.962 Running I/O for 10 seconds... 
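Both ends of the TLS association reference the same interchange key file: on the target side the trace adds the listener with -k (TLS) and registers the host NQN with nvmf_subsystem_add_host --psk, and on the initiator side bdevperf attaches the controller over its own RPC socket with the same --psk path. Pulled out of the harness, the pairing looks roughly like this (paths and NQNs as used by the test):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt   # NVMeTLSkey-1:01:... interchange file, mode 0600

    # target side: TLS-enabled listener plus a host entry bound to the PSK
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

    # initiator side, against bdevperf's RPC socket: attach with the matching PSK
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

bdevperf.py perform_tests then drives the 10-second verify workload whose results follow.
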
00:17:09.934 00:17:09.934 Latency(us) 00:17:09.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.934 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:09.934 Verification LBA range: start 0x0 length 0x2000 00:17:09.934 TLSTESTn1 : 10.01 6491.03 25.36 0.00 0.00 19688.97 5153.51 22997.18 00:17:09.934 =================================================================================================================== 00:17:09.934 Total : 6491.03 25.36 0.00 0.00 19688.97 5153.51 22997.18 00:17:09.934 0 00:17:09.934 00:25:57 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:09.934 00:25:57 -- target/tls.sh@45 -- # killprocess 88200 00:17:09.934 00:25:57 -- common/autotest_common.sh@926 -- # '[' -z 88200 ']' 00:17:09.934 00:25:57 -- common/autotest_common.sh@930 -- # kill -0 88200 00:17:09.934 00:25:57 -- common/autotest_common.sh@931 -- # uname 00:17:09.934 00:25:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:09.934 00:25:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88200 00:17:09.934 killing process with pid 88200 00:17:09.934 Received shutdown signal, test time was about 10.000000 seconds 00:17:09.934 00:17:09.934 Latency(us) 00:17:09.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.934 =================================================================================================================== 00:17:09.934 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:09.934 00:25:57 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:09.934 00:25:57 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:09.934 00:25:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88200' 00:17:09.934 00:25:57 -- common/autotest_common.sh@945 -- # kill 88200 00:17:09.934 00:25:57 -- common/autotest_common.sh@950 -- # wait 88200 00:17:10.193 00:25:57 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:10.193 00:25:57 -- common/autotest_common.sh@640 -- # local es=0 00:17:10.193 00:25:57 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:10.193 00:25:57 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:10.193 00:25:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:10.193 00:25:57 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:10.193 00:25:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:10.193 00:25:57 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:10.193 00:25:57 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:10.193 00:25:57 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:10.193 00:25:57 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:10.193 00:25:57 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:17:10.193 00:25:57 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:10.193 00:25:57 -- target/tls.sh@28 -- # bdevperf_pid=88355 00:17:10.193 00:25:57 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.193 00:25:57 -- target/tls.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:10.193 00:25:57 -- target/tls.sh@31 -- # waitforlisten 88355 /var/tmp/bdevperf.sock 00:17:10.193 00:25:57 -- common/autotest_common.sh@819 -- # '[' -z 88355 ']' 00:17:10.193 00:25:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.193 00:25:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:10.193 00:25:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:10.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:10.193 00:25:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:10.193 00:25:57 -- common/autotest_common.sh@10 -- # set +x 00:17:10.193 [2024-07-13 00:25:57.411174] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:10.193 [2024-07-13 00:25:57.411291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88355 ] 00:17:10.452 [2024-07-13 00:25:57.552740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.452 [2024-07-13 00:25:57.662767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.387 00:25:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:11.387 00:25:58 -- common/autotest_common.sh@852 -- # return 0 00:17:11.387 00:25:58 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:11.387 [2024-07-13 00:25:58.575593] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:11.387 [2024-07-13 00:25:58.586290] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:11.387 [2024-07-13 00:25:58.586341] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf9c0 (107): Transport endpoint is not connected 00:17:11.387 [2024-07-13 00:25:58.587332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf9c0 (9): Bad file descriptor 00:17:11.387 [2024-07-13 00:25:58.588330] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:11.387 [2024-07-13 00:25:58.588349] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:11.387 [2024-07-13 00:25:58.588358] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
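The controller failure above and the JSON-RPC error that follows are the expected result of the wrong-key negative test (target/tls.sh@155): the target registered key1.txt for nqn.2016-06.io.spdk:host1 during setup, while the initiator presented key2.txt, so the TLS handshake never completes and bdev_nvme_attach_controller is rejected. A minimal sketch of the mismatch, reusing the paths and NQNs from the trace (an illustration of the test flow, not the literal tls.sh code):

  # target side: host1 was bound to key1.txt when the subsystem was set up
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
  # initiator side: presenting key2.txt for the same host/subsystem pair is expected to fail
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt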
00:17:11.387 2024/07/13 00:25:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:11.387 request: 00:17:11.387 { 00:17:11.387 "method": "bdev_nvme_attach_controller", 00:17:11.387 "params": { 00:17:11.387 "name": "TLSTEST", 00:17:11.387 "trtype": "tcp", 00:17:11.387 "traddr": "10.0.0.2", 00:17:11.387 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:11.387 "adrfam": "ipv4", 00:17:11.387 "trsvcid": "4420", 00:17:11.387 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.387 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:17:11.387 } 00:17:11.387 } 00:17:11.387 Got JSON-RPC error response 00:17:11.387 GoRPCClient: error on JSON-RPC call 00:17:11.387 00:25:58 -- target/tls.sh@36 -- # killprocess 88355 00:17:11.387 00:25:58 -- common/autotest_common.sh@926 -- # '[' -z 88355 ']' 00:17:11.387 00:25:58 -- common/autotest_common.sh@930 -- # kill -0 88355 00:17:11.387 00:25:58 -- common/autotest_common.sh@931 -- # uname 00:17:11.645 00:25:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:11.645 00:25:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88355 00:17:11.645 killing process with pid 88355 00:17:11.645 Received shutdown signal, test time was about 10.000000 seconds 00:17:11.645 00:17:11.645 Latency(us) 00:17:11.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.645 =================================================================================================================== 00:17:11.645 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:11.645 00:25:58 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:11.645 00:25:58 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:11.645 00:25:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88355' 00:17:11.645 00:25:58 -- common/autotest_common.sh@945 -- # kill 88355 00:17:11.645 00:25:58 -- common/autotest_common.sh@950 -- # wait 88355 00:17:11.903 00:25:58 -- target/tls.sh@37 -- # return 1 00:17:11.903 00:25:58 -- common/autotest_common.sh@643 -- # es=1 00:17:11.903 00:25:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:11.903 00:25:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:11.903 00:25:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:11.903 00:25:58 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:11.903 00:25:58 -- common/autotest_common.sh@640 -- # local es=0 00:17:11.904 00:25:58 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:11.904 00:25:58 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:11.904 00:25:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:11.904 00:25:58 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:11.904 00:25:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:11.904 00:25:58 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:11.904 00:25:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:11.904 00:25:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:11.904 00:25:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:11.904 00:25:58 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:11.904 00:25:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:11.904 00:25:58 -- target/tls.sh@28 -- # bdevperf_pid=88395 00:17:11.904 00:25:58 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:11.904 00:25:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:11.904 00:25:58 -- target/tls.sh@31 -- # waitforlisten 88395 /var/tmp/bdevperf.sock 00:17:11.904 00:25:58 -- common/autotest_common.sh@819 -- # '[' -z 88395 ']' 00:17:11.904 00:25:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.904 00:25:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:11.904 00:25:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.904 00:25:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:11.904 00:25:58 -- common/autotest_common.sh@10 -- # set +x 00:17:11.904 [2024-07-13 00:25:58.953814] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:11.904 [2024-07-13 00:25:58.953922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88395 ] 00:17:11.904 [2024-07-13 00:25:59.091019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.162 [2024-07-13 00:25:59.188930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.728 00:25:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:12.728 00:25:59 -- common/autotest_common.sh@852 -- # return 0 00:17:12.728 00:25:59 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:12.986 [2024-07-13 00:26:00.084161] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:12.986 [2024-07-13 00:26:00.089256] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:12.986 [2024-07-13 00:26:00.089290] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:12.986 [2024-07-13 00:26:00.089340] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:12.986 [2024-07-13 00:26:00.089965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6879c0 (107): Transport endpoint is not connected 00:17:12.986 
[2024-07-13 00:26:00.090965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6879c0 (9): Bad file descriptor 00:17:12.986 [2024-07-13 00:26:00.091961] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:12.986 [2024-07-13 00:26:00.091993] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:12.986 [2024-07-13 00:26:00.092003] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:12.986 2024/07/13 00:26:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:12.986 request: 00:17:12.986 { 00:17:12.986 "method": "bdev_nvme_attach_controller", 00:17:12.986 "params": { 00:17:12.986 "name": "TLSTEST", 00:17:12.986 "trtype": "tcp", 00:17:12.986 "traddr": "10.0.0.2", 00:17:12.986 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:12.986 "adrfam": "ipv4", 00:17:12.986 "trsvcid": "4420", 00:17:12.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:12.986 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:12.986 } 00:17:12.986 } 00:17:12.986 Got JSON-RPC error response 00:17:12.986 GoRPCClient: error on JSON-RPC call 00:17:12.986 00:26:00 -- target/tls.sh@36 -- # killprocess 88395 00:17:12.986 00:26:00 -- common/autotest_common.sh@926 -- # '[' -z 88395 ']' 00:17:12.986 00:26:00 -- common/autotest_common.sh@930 -- # kill -0 88395 00:17:12.986 00:26:00 -- common/autotest_common.sh@931 -- # uname 00:17:12.986 00:26:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:12.986 00:26:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88395 00:17:12.986 killing process with pid 88395 00:17:12.986 Received shutdown signal, test time was about 10.000000 seconds 00:17:12.986 00:17:12.986 Latency(us) 00:17:12.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.986 =================================================================================================================== 00:17:12.986 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:12.986 00:26:00 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:12.986 00:26:00 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:12.986 00:26:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88395' 00:17:12.986 00:26:00 -- common/autotest_common.sh@945 -- # kill 88395 00:17:12.986 00:26:00 -- common/autotest_common.sh@950 -- # wait 88395 00:17:13.245 00:26:00 -- target/tls.sh@37 -- # return 1 00:17:13.245 00:26:00 -- common/autotest_common.sh@643 -- # es=1 00:17:13.245 00:26:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:13.245 00:26:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:13.245 00:26:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:13.245 00:26:00 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:13.245 00:26:00 -- common/autotest_common.sh@640 -- # local es=0 00:17:13.245 00:26:00 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:13.245 00:26:00 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:13.245 00:26:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:13.245 00:26:00 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:13.245 00:26:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:13.245 00:26:00 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:13.245 00:26:00 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:13.245 00:26:00 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:13.245 00:26:00 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:13.245 00:26:00 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:13.245 00:26:00 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:13.245 00:26:00 -- target/tls.sh@28 -- # bdevperf_pid=88436 00:17:13.245 00:26:00 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:13.245 00:26:00 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:13.245 00:26:00 -- target/tls.sh@31 -- # waitforlisten 88436 /var/tmp/bdevperf.sock 00:17:13.245 00:26:00 -- common/autotest_common.sh@819 -- # '[' -z 88436 ']' 00:17:13.245 00:26:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.245 00:26:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:13.245 00:26:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.245 00:26:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:13.245 00:26:00 -- common/autotest_common.sh@10 -- # set +x 00:17:13.245 [2024-07-13 00:26:00.445931] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:13.245 [2024-07-13 00:26:00.446046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88436 ] 00:17:13.504 [2024-07-13 00:26:00.579741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.504 [2024-07-13 00:26:00.666438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.438 00:26:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:14.438 00:26:01 -- common/autotest_common.sh@852 -- # return 0 00:17:14.438 00:26:01 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:14.438 [2024-07-13 00:26:01.567153] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:14.438 [2024-07-13 00:26:01.576485] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:14.438 [2024-07-13 00:26:01.576563] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:14.438 [2024-07-13 00:26:01.576627] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:14.438 [2024-07-13 00:26:01.576826] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8259c0 (107): Transport endpoint is not connected 00:17:14.438 [2024-07-13 00:26:01.577816] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8259c0 (9): Bad file descriptor 00:17:14.438 [2024-07-13 00:26:01.578813] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:14.438 [2024-07-13 00:26:01.578831] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:14.438 [2024-07-13 00:26:01.578841] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
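The two preceding negative tests (target/tls.sh@158 and @161) fail for the same underlying reason, visible in the "Could not find PSK for identity" errors: the target looks the PSK up by the TLS identity string NVMe0R01 <hostnqn> <subnqn>, and key1.txt was registered only for the pair (nqn.2016-06.io.spdk:host1, nqn.2016-06.io.spdk:cnode1). Changing either NQN leaves the lookup empty, the handshake is refused, the attach ends in the error state above, and the JSON-RPC error below reports it. The two mismatched attach calls, with flags copied from the trace (illustrative only):

  # the registered identity is host1 + cnode1; both lookups below are expected to miss
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 \
      -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt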
00:17:14.439 2024/07/13 00:26:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:14.439 request: 00:17:14.439 { 00:17:14.439 "method": "bdev_nvme_attach_controller", 00:17:14.439 "params": { 00:17:14.439 "name": "TLSTEST", 00:17:14.439 "trtype": "tcp", 00:17:14.439 "traddr": "10.0.0.2", 00:17:14.439 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:14.439 "adrfam": "ipv4", 00:17:14.439 "trsvcid": "4420", 00:17:14.439 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:14.439 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:14.439 } 00:17:14.439 } 00:17:14.439 Got JSON-RPC error response 00:17:14.439 GoRPCClient: error on JSON-RPC call 00:17:14.439 00:26:01 -- target/tls.sh@36 -- # killprocess 88436 00:17:14.439 00:26:01 -- common/autotest_common.sh@926 -- # '[' -z 88436 ']' 00:17:14.439 00:26:01 -- common/autotest_common.sh@930 -- # kill -0 88436 00:17:14.439 00:26:01 -- common/autotest_common.sh@931 -- # uname 00:17:14.439 00:26:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:14.439 00:26:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88436 00:17:14.439 killing process with pid 88436 00:17:14.439 Received shutdown signal, test time was about 10.000000 seconds 00:17:14.439 00:17:14.439 Latency(us) 00:17:14.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.439 =================================================================================================================== 00:17:14.439 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:14.439 00:26:01 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:14.439 00:26:01 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:14.439 00:26:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88436' 00:17:14.439 00:26:01 -- common/autotest_common.sh@945 -- # kill 88436 00:17:14.439 00:26:01 -- common/autotest_common.sh@950 -- # wait 88436 00:17:14.697 00:26:01 -- target/tls.sh@37 -- # return 1 00:17:14.697 00:26:01 -- common/autotest_common.sh@643 -- # es=1 00:17:14.697 00:26:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:14.697 00:26:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:14.698 00:26:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:14.698 00:26:01 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:14.698 00:26:01 -- common/autotest_common.sh@640 -- # local es=0 00:17:14.698 00:26:01 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:14.698 00:26:01 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:14.698 00:26:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:14.698 00:26:01 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:14.698 00:26:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:14.698 00:26:01 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:14.698 00:26:01 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:14.698 00:26:01 -- target/tls.sh@23 -- 
# subnqn=nqn.2016-06.io.spdk:cnode1 00:17:14.698 00:26:01 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:14.698 00:26:01 -- target/tls.sh@23 -- # psk= 00:17:14.698 00:26:01 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:14.698 00:26:01 -- target/tls.sh@28 -- # bdevperf_pid=88486 00:17:14.698 00:26:01 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:14.698 00:26:01 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:14.698 00:26:01 -- target/tls.sh@31 -- # waitforlisten 88486 /var/tmp/bdevperf.sock 00:17:14.698 00:26:01 -- common/autotest_common.sh@819 -- # '[' -z 88486 ']' 00:17:14.698 00:26:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:14.698 00:26:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:14.698 00:26:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:14.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:14.698 00:26:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:14.698 00:26:01 -- common/autotest_common.sh@10 -- # set +x 00:17:14.956 [2024-07-13 00:26:01.945391] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:14.956 [2024-07-13 00:26:01.945495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88486 ] 00:17:14.957 [2024-07-13 00:26:02.084343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.957 [2024-07-13 00:26:02.162931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.892 00:26:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:15.892 00:26:02 -- common/autotest_common.sh@852 -- # return 0 00:17:15.892 00:26:02 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:15.892 [2024-07-13 00:26:02.996749] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:15.892 [2024-07-13 00:26:02.998571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121f5e0 (9): Bad file descriptor 00:17:15.892 [2024-07-13 00:26:02.999566] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:15.892 [2024-07-13 00:26:02.999587] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:15.892 [2024-07-13 00:26:02.999596] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
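This third negative case (target/tls.sh@164) omits the PSK entirely: run_bdevperf is called with an empty psk argument, so bdev_nvme_attach_controller is issued without --psk. The listener on 10.0.0.2:4420 was created with -k, i.e. it expects a TLS-secured connection, so the plain TCP attach is torn down in the same way (Bad file descriptor, controller in error state), and the JSON-RPC error below shows a params map with no psk entry at all. The attempted call, as a sketch:

  # the 4420 listener was created with -k (TLS), so attaching without --psk is expected to fail
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1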
00:17:15.892 2024/07/13 00:26:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:15.892 request: 00:17:15.892 { 00:17:15.892 "method": "bdev_nvme_attach_controller", 00:17:15.892 "params": { 00:17:15.892 "name": "TLSTEST", 00:17:15.892 "trtype": "tcp", 00:17:15.892 "traddr": "10.0.0.2", 00:17:15.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:15.892 "adrfam": "ipv4", 00:17:15.892 "trsvcid": "4420", 00:17:15.892 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:15.892 } 00:17:15.892 } 00:17:15.892 Got JSON-RPC error response 00:17:15.892 GoRPCClient: error on JSON-RPC call 00:17:15.892 00:26:03 -- target/tls.sh@36 -- # killprocess 88486 00:17:15.892 00:26:03 -- common/autotest_common.sh@926 -- # '[' -z 88486 ']' 00:17:15.892 00:26:03 -- common/autotest_common.sh@930 -- # kill -0 88486 00:17:15.892 00:26:03 -- common/autotest_common.sh@931 -- # uname 00:17:15.892 00:26:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:15.892 00:26:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88486 00:17:15.892 killing process with pid 88486 00:17:15.892 Received shutdown signal, test time was about 10.000000 seconds 00:17:15.892 00:17:15.892 Latency(us) 00:17:15.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.892 =================================================================================================================== 00:17:15.892 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:15.892 00:26:03 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:15.892 00:26:03 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:15.892 00:26:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88486' 00:17:15.892 00:26:03 -- common/autotest_common.sh@945 -- # kill 88486 00:17:15.892 00:26:03 -- common/autotest_common.sh@950 -- # wait 88486 00:17:16.150 00:26:03 -- target/tls.sh@37 -- # return 1 00:17:16.150 00:26:03 -- common/autotest_common.sh@643 -- # es=1 00:17:16.150 00:26:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:16.150 00:26:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:16.150 00:26:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:16.150 00:26:03 -- target/tls.sh@167 -- # killprocess 87830 00:17:16.150 00:26:03 -- common/autotest_common.sh@926 -- # '[' -z 87830 ']' 00:17:16.150 00:26:03 -- common/autotest_common.sh@930 -- # kill -0 87830 00:17:16.150 00:26:03 -- common/autotest_common.sh@931 -- # uname 00:17:16.150 00:26:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:16.150 00:26:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87830 00:17:16.150 killing process with pid 87830 00:17:16.150 00:26:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:16.150 00:26:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:16.150 00:26:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87830' 00:17:16.150 00:26:03 -- common/autotest_common.sh@945 -- # kill 87830 00:17:16.150 00:26:03 -- common/autotest_common.sh@950 -- # wait 87830 00:17:16.408 00:26:03 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:17:16.408 00:26:03 -- 
target/tls.sh@49 -- # local key hash crc 00:17:16.408 00:26:03 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:16.408 00:26:03 -- target/tls.sh@51 -- # hash=02 00:17:16.408 00:26:03 -- target/tls.sh@52 -- # gzip -1 -c 00:17:16.408 00:26:03 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:16.408 00:26:03 -- target/tls.sh@52 -- # head -c 4 00:17:16.408 00:26:03 -- target/tls.sh@52 -- # tail -c8 00:17:16.408 00:26:03 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:16.408 00:26:03 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:16.408 00:26:03 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:16.408 00:26:03 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:16.408 00:26:03 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:16.408 00:26:03 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.408 00:26:03 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:16.408 00:26:03 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.408 00:26:03 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:16.408 00:26:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:16.408 00:26:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:16.408 00:26:03 -- common/autotest_common.sh@10 -- # set +x 00:17:16.408 00:26:03 -- nvmf/common.sh@469 -- # nvmfpid=88547 00:17:16.408 00:26:03 -- nvmf/common.sh@470 -- # waitforlisten 88547 00:17:16.408 00:26:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:16.408 00:26:03 -- common/autotest_common.sh@819 -- # '[' -z 88547 ']' 00:17:16.408 00:26:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.408 00:26:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:16.408 00:26:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.408 00:26:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:16.408 00:26:03 -- common/autotest_common.sh@10 -- # set +x 00:17:16.666 [2024-07-13 00:26:03.688360] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:16.666 [2024-07-13 00:26:03.688476] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.666 [2024-07-13 00:26:03.828315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.925 [2024-07-13 00:26:03.914543] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:16.925 [2024-07-13 00:26:03.914726] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.925 [2024-07-13 00:26:03.914747] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:16.925 [2024-07-13 00:26:03.914755] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.925 [2024-07-13 00:26:03.914790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.491 00:26:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:17.491 00:26:04 -- common/autotest_common.sh@852 -- # return 0 00:17:17.491 00:26:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:17.491 00:26:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:17.491 00:26:04 -- common/autotest_common.sh@10 -- # set +x 00:17:17.491 00:26:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.491 00:26:04 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:17.491 00:26:04 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:17.491 00:26:04 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:17.749 [2024-07-13 00:26:04.941003] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.749 00:26:04 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:18.008 00:26:05 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:18.266 [2024-07-13 00:26:05.421212] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:18.266 [2024-07-13 00:26:05.421523] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.266 00:26:05 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:18.525 malloc0 00:17:18.525 00:26:05 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:18.787 00:26:05 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:19.061 00:26:06 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:19.061 00:26:06 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:19.061 00:26:06 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:19.061 00:26:06 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:19.061 00:26:06 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:19.061 00:26:06 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:19.061 00:26:06 -- target/tls.sh@28 -- # bdevperf_pid=88644 00:17:19.061 00:26:06 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:19.061 00:26:06 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:19.061 00:26:06 -- target/tls.sh@31 -- # waitforlisten 88644 /var/tmp/bdevperf.sock 00:17:19.061 00:26:06 -- common/autotest_common.sh@819 -- # '[' -z 88644 ']' 00:17:19.061 00:26:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.061 
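The key driving this run was produced a few lines up by format_interchange_psk (target/tls.sh@168): the raw hex key 00112233445566778899aabbccddeeff0011223344556677 is turned into the interchange form NVMeTLSkey-1:02:<base64>: before being written to key_long.txt and chmod'ed to 0600. Reading the trace, the recipe is: compute the CRC32 of the ASCII key string (taken from the last 8 bytes of a gzip stream, which hold CRC32 then ISIZE, little-endian), append those 4 raw bytes to the key, base64 the concatenation, and wrap it in the NVMeTLSkey-1:<hash>: prefix with a trailing colon. A rough shell reconstruction of those steps (it mirrors the trace; the actual helper in tls.sh may handle the raw CRC bytes differently, e.g. via process substitution):

  key=00112233445566778899aabbccddeeff0011223344556677
  hash=02
  # the last 8 bytes of a gzip stream are CRC32 + ISIZE; keep the first 4 (the CRC32)
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
  echo "NVMeTLSkey-1:$hash:$(echo -n "$key$crc" | base64):"
  # prints NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: as echoed in the trace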
00:26:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:19.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:19.061 00:26:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:19.061 00:26:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:19.061 00:26:06 -- common/autotest_common.sh@10 -- # set +x 00:17:19.061 [2024-07-13 00:26:06.254251] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:19.061 [2024-07-13 00:26:06.254370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88644 ] 00:17:19.334 [2024-07-13 00:26:06.397585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.334 [2024-07-13 00:26:06.520631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.267 00:26:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:20.267 00:26:07 -- common/autotest_common.sh@852 -- # return 0 00:17:20.267 00:26:07 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:20.268 [2024-07-13 00:26:07.415213] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:20.268 TLSTESTn1 00:17:20.527 00:26:07 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:20.527 Running I/O for 10 seconds... 
00:17:30.488 00:17:30.488 Latency(us) 00:17:30.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.488 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:30.488 Verification LBA range: start 0x0 length 0x2000 00:17:30.488 TLSTESTn1 : 10.02 5872.50 22.94 0.00 0.00 21758.77 4617.31 24903.68 00:17:30.488 =================================================================================================================== 00:17:30.488 Total : 5872.50 22.94 0.00 0.00 21758.77 4617.31 24903.68 00:17:30.488 0 00:17:30.488 00:26:17 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:30.488 00:26:17 -- target/tls.sh@45 -- # killprocess 88644 00:17:30.488 00:26:17 -- common/autotest_common.sh@926 -- # '[' -z 88644 ']' 00:17:30.488 00:26:17 -- common/autotest_common.sh@930 -- # kill -0 88644 00:17:30.488 00:26:17 -- common/autotest_common.sh@931 -- # uname 00:17:30.488 00:26:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:30.488 00:26:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88644 00:17:30.488 00:26:17 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:30.488 00:26:17 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:30.488 00:26:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88644' 00:17:30.488 killing process with pid 88644 00:17:30.488 00:26:17 -- common/autotest_common.sh@945 -- # kill 88644 00:17:30.488 Received shutdown signal, test time was about 10.000000 seconds 00:17:30.488 00:17:30.488 Latency(us) 00:17:30.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.488 =================================================================================================================== 00:17:30.488 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:30.488 00:26:17 -- common/autotest_common.sh@950 -- # wait 88644 00:17:31.054 00:26:17 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:31.054 00:26:17 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:31.054 00:26:17 -- common/autotest_common.sh@640 -- # local es=0 00:17:31.054 00:26:17 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:31.054 00:26:17 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:31.054 00:26:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:31.054 00:26:17 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:31.054 00:26:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:31.055 00:26:17 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:31.055 00:26:17 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:31.055 00:26:17 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:31.055 00:26:17 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:31.055 00:26:17 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:31.055 00:26:17 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:31.055 00:26:17 -- target/tls.sh@28 -- # bdevperf_pid=88802 
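At target/tls.sh@179 the key file is deliberately loosened to mode 0666, and tls.sh@180 repeats the bdevperf attach expecting it to fail: the initiator refuses to load a PSK file that is group/world accessible, which is the "Incorrect permissions for PSK file" / "Could not retrieve PSK from file" error that appears below (the exact permission policy is inferred from this log). The rule being exercised, in short:

  chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt   # loadable - the previous attach succeeded
  chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt   # rejected by bdev_nvme_attach_controller --psk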
00:17:31.055 00:26:17 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:31.055 00:26:17 -- target/tls.sh@31 -- # waitforlisten 88802 /var/tmp/bdevperf.sock 00:17:31.055 00:26:17 -- common/autotest_common.sh@819 -- # '[' -z 88802 ']' 00:17:31.055 00:26:17 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:31.055 00:26:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.055 00:26:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:31.055 00:26:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:31.055 00:26:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:31.055 00:26:17 -- common/autotest_common.sh@10 -- # set +x 00:17:31.055 [2024-07-13 00:26:18.049823] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:31.055 [2024-07-13 00:26:18.049960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88802 ] 00:17:31.055 [2024-07-13 00:26:18.188951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.313 [2024-07-13 00:26:18.310039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.879 00:26:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:31.879 00:26:18 -- common/autotest_common.sh@852 -- # return 0 00:17:31.879 00:26:18 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:32.138 [2024-07-13 00:26:19.233011] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:32.138 [2024-07-13 00:26:19.233073] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:32.138 2024/07/13 00:26:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:32.138 request: 00:17:32.138 { 00:17:32.138 "method": "bdev_nvme_attach_controller", 00:17:32.138 "params": { 00:17:32.138 "name": "TLSTEST", 00:17:32.138 "trtype": "tcp", 00:17:32.138 "traddr": "10.0.0.2", 00:17:32.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:32.138 "adrfam": "ipv4", 00:17:32.138 "trsvcid": "4420", 00:17:32.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.138 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:32.138 } 00:17:32.138 } 00:17:32.138 Got JSON-RPC error response 00:17:32.138 GoRPCClient: error on JSON-RPC call 00:17:32.138 00:26:19 -- target/tls.sh@36 -- # killprocess 88802 00:17:32.138 00:26:19 -- common/autotest_common.sh@926 -- # '[' -z 88802 ']' 
00:17:32.138 00:26:19 -- common/autotest_common.sh@930 -- # kill -0 88802 00:17:32.138 00:26:19 -- common/autotest_common.sh@931 -- # uname 00:17:32.138 00:26:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:32.138 00:26:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88802 00:17:32.138 00:26:19 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:32.138 00:26:19 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:32.138 killing process with pid 88802 00:17:32.138 00:26:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88802' 00:17:32.138 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.138 00:17:32.138 Latency(us) 00:17:32.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.138 =================================================================================================================== 00:17:32.138 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:32.138 00:26:19 -- common/autotest_common.sh@945 -- # kill 88802 00:17:32.138 00:26:19 -- common/autotest_common.sh@950 -- # wait 88802 00:17:32.396 00:26:19 -- target/tls.sh@37 -- # return 1 00:17:32.396 00:26:19 -- common/autotest_common.sh@643 -- # es=1 00:17:32.396 00:26:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:32.396 00:26:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:32.396 00:26:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:32.396 00:26:19 -- target/tls.sh@183 -- # killprocess 88547 00:17:32.396 00:26:19 -- common/autotest_common.sh@926 -- # '[' -z 88547 ']' 00:17:32.396 00:26:19 -- common/autotest_common.sh@930 -- # kill -0 88547 00:17:32.396 00:26:19 -- common/autotest_common.sh@931 -- # uname 00:17:32.396 00:26:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:32.396 00:26:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88547 00:17:32.396 00:26:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:32.396 00:26:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:32.396 killing process with pid 88547 00:17:32.396 00:26:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88547' 00:17:32.396 00:26:19 -- common/autotest_common.sh@945 -- # kill 88547 00:17:32.396 00:26:19 -- common/autotest_common.sh@950 -- # wait 88547 00:17:32.654 00:26:19 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:32.654 00:26:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:32.654 00:26:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:32.654 00:26:19 -- common/autotest_common.sh@10 -- # set +x 00:17:32.654 00:26:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:32.654 00:26:19 -- nvmf/common.sh@469 -- # nvmfpid=88853 00:17:32.654 00:26:19 -- nvmf/common.sh@470 -- # waitforlisten 88853 00:17:32.654 00:26:19 -- common/autotest_common.sh@819 -- # '[' -z 88853 ']' 00:17:32.654 00:26:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.654 00:26:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:32.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.654 00:26:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:32.654 00:26:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:32.654 00:26:19 -- common/autotest_common.sh@10 -- # set +x 00:17:32.911 [2024-07-13 00:26:19.903949] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:32.911 [2024-07-13 00:26:19.904035] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.911 [2024-07-13 00:26:20.037733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.169 [2024-07-13 00:26:20.141219] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:33.169 [2024-07-13 00:26:20.141377] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.169 [2024-07-13 00:26:20.141390] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.169 [2024-07-13 00:26:20.141399] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.169 [2024-07-13 00:26:20.141435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.735 00:26:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:33.735 00:26:20 -- common/autotest_common.sh@852 -- # return 0 00:17:33.735 00:26:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:33.735 00:26:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:33.735 00:26:20 -- common/autotest_common.sh@10 -- # set +x 00:17:33.735 00:26:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.735 00:26:20 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:33.735 00:26:20 -- common/autotest_common.sh@640 -- # local es=0 00:17:33.735 00:26:20 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:33.735 00:26:20 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:17:33.735 00:26:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:33.735 00:26:20 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:17:33.735 00:26:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:33.735 00:26:20 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:33.735 00:26:20 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:33.735 00:26:20 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:33.993 [2024-07-13 00:26:21.123007] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.993 00:26:21 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:34.251 00:26:21 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:34.510 [2024-07-13 00:26:21.603088] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:34.510 [2024-07-13 00:26:21.603353] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
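The same permission check is now exercised on the target side (target/tls.sh@186): a fresh nvmf_tgt (pid 88853) comes up and setup_nvmf_tgt is expected to fail because key_long.txt is still mode 0666. Transport, subsystem and listener creation all succeed, as the listening notice above shows; the step that actually reads the key file is the host registration, and that is what fails just below with the same "Incorrect permissions for PSK file" error, this time out of tcp.c on the target. The failing call is the one traced at target/tls.sh@67:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt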
00:17:34.510 00:26:21 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:34.767 malloc0 00:17:34.768 00:26:21 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:35.026 00:26:22 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:35.284 [2024-07-13 00:26:22.337221] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:35.284 [2024-07-13 00:26:22.337268] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:35.284 [2024-07-13 00:26:22.337297] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:35.284 2024/07/13 00:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:35.284 request: 00:17:35.284 { 00:17:35.284 "method": "nvmf_subsystem_add_host", 00:17:35.284 "params": { 00:17:35.284 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.284 "host": "nqn.2016-06.io.spdk:host1", 00:17:35.285 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:35.285 } 00:17:35.285 } 00:17:35.285 Got JSON-RPC error response 00:17:35.285 GoRPCClient: error on JSON-RPC call 00:17:35.285 00:26:22 -- common/autotest_common.sh@643 -- # es=1 00:17:35.285 00:26:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:35.285 00:26:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:35.285 00:26:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:35.285 00:26:22 -- target/tls.sh@189 -- # killprocess 88853 00:17:35.285 00:26:22 -- common/autotest_common.sh@926 -- # '[' -z 88853 ']' 00:17:35.285 00:26:22 -- common/autotest_common.sh@930 -- # kill -0 88853 00:17:35.285 00:26:22 -- common/autotest_common.sh@931 -- # uname 00:17:35.285 00:26:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:35.285 00:26:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88853 00:17:35.285 00:26:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:35.285 00:26:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:35.285 killing process with pid 88853 00:17:35.285 00:26:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88853' 00:17:35.285 00:26:22 -- common/autotest_common.sh@945 -- # kill 88853 00:17:35.285 00:26:22 -- common/autotest_common.sh@950 -- # wait 88853 00:17:35.543 00:26:22 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:35.543 00:26:22 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:35.543 00:26:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:35.543 00:26:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:35.543 00:26:22 -- common/autotest_common.sh@10 -- # set +x 00:17:35.543 00:26:22 -- nvmf/common.sh@469 -- # nvmfpid=88963 00:17:35.543 00:26:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:35.543 00:26:22 -- nvmf/common.sh@470 -- # waitforlisten 88963 00:17:35.543 00:26:22 -- 
common/autotest_common.sh@819 -- # '[' -z 88963 ']' 00:17:35.543 00:26:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.543 00:26:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:35.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.543 00:26:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.543 00:26:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:35.543 00:26:22 -- common/autotest_common.sh@10 -- # set +x 00:17:35.543 [2024-07-13 00:26:22.744452] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:35.543 [2024-07-13 00:26:22.744631] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.802 [2024-07-13 00:26:22.883764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.802 [2024-07-13 00:26:22.990864] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:35.802 [2024-07-13 00:26:22.991012] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.802 [2024-07-13 00:26:22.991025] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.802 [2024-07-13 00:26:22.991033] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.802 [2024-07-13 00:26:22.991067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.736 00:26:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:36.736 00:26:23 -- common/autotest_common.sh@852 -- # return 0 00:17:36.736 00:26:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:36.736 00:26:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:36.736 00:26:23 -- common/autotest_common.sh@10 -- # set +x 00:17:36.736 00:26:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.736 00:26:23 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:36.736 00:26:23 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:36.736 00:26:23 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:36.736 [2024-07-13 00:26:23.906486] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.736 00:26:23 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:36.993 00:26:24 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:37.250 [2024-07-13 00:26:24.314529] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:37.250 [2024-07-13 00:26:24.314857] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.250 00:26:24 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:37.508 malloc0 00:17:37.508 00:26:24 -- target/tls.sh@65 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:37.767 00:26:24 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:38.025 00:26:25 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:38.025 00:26:25 -- target/tls.sh@197 -- # bdevperf_pid=89066 00:17:38.025 00:26:25 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:38.025 00:26:25 -- target/tls.sh@200 -- # waitforlisten 89066 /var/tmp/bdevperf.sock 00:17:38.025 00:26:25 -- common/autotest_common.sh@819 -- # '[' -z 89066 ']' 00:17:38.025 00:26:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.025 00:26:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:38.025 00:26:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.025 00:26:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:38.025 00:26:25 -- common/autotest_common.sh@10 -- # set +x 00:17:38.025 [2024-07-13 00:26:25.055201] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:38.025 [2024-07-13 00:26:25.055306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89066 ] 00:17:38.025 [2024-07-13 00:26:25.191031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.283 [2024-07-13 00:26:25.290079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.850 00:26:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:38.850 00:26:25 -- common/autotest_common.sh@852 -- # return 0 00:17:38.850 00:26:25 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:39.108 [2024-07-13 00:26:26.172165] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:39.108 TLSTESTn1 00:17:39.108 00:26:26 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:39.367 00:26:26 -- target/tls.sh@205 -- # tgtconf='{ 00:17:39.367 "subsystems": [ 00:17:39.367 { 00:17:39.367 "subsystem": "iobuf", 00:17:39.367 "config": [ 00:17:39.367 { 00:17:39.367 "method": "iobuf_set_options", 00:17:39.367 "params": { 00:17:39.367 "large_bufsize": 135168, 00:17:39.367 "large_pool_count": 1024, 00:17:39.367 "small_bufsize": 8192, 00:17:39.367 "small_pool_count": 8192 00:17:39.367 } 00:17:39.367 } 00:17:39.367 ] 00:17:39.367 }, 00:17:39.367 { 00:17:39.367 "subsystem": "sock", 00:17:39.367 "config": [ 00:17:39.367 { 00:17:39.367 "method": "sock_impl_set_options", 00:17:39.367 "params": { 00:17:39.367 "enable_ktls": false, 00:17:39.367 "enable_placement_id": 0, 00:17:39.367 "enable_quickack": false, 00:17:39.367 "enable_recv_pipe": true, 00:17:39.367 
"enable_zerocopy_send_client": false, 00:17:39.367 "enable_zerocopy_send_server": true, 00:17:39.367 "impl_name": "posix", 00:17:39.367 "recv_buf_size": 2097152, 00:17:39.367 "send_buf_size": 2097152, 00:17:39.367 "tls_version": 0, 00:17:39.367 "zerocopy_threshold": 0 00:17:39.367 } 00:17:39.367 }, 00:17:39.367 { 00:17:39.367 "method": "sock_impl_set_options", 00:17:39.367 "params": { 00:17:39.367 "enable_ktls": false, 00:17:39.367 "enable_placement_id": 0, 00:17:39.367 "enable_quickack": false, 00:17:39.367 "enable_recv_pipe": true, 00:17:39.367 "enable_zerocopy_send_client": false, 00:17:39.367 "enable_zerocopy_send_server": true, 00:17:39.367 "impl_name": "ssl", 00:17:39.367 "recv_buf_size": 4096, 00:17:39.367 "send_buf_size": 4096, 00:17:39.367 "tls_version": 0, 00:17:39.367 "zerocopy_threshold": 0 00:17:39.367 } 00:17:39.367 } 00:17:39.367 ] 00:17:39.367 }, 00:17:39.367 { 00:17:39.367 "subsystem": "vmd", 00:17:39.367 "config": [] 00:17:39.367 }, 00:17:39.367 { 00:17:39.367 "subsystem": "accel", 00:17:39.367 "config": [ 00:17:39.367 { 00:17:39.367 "method": "accel_set_options", 00:17:39.367 "params": { 00:17:39.367 "buf_count": 2048, 00:17:39.367 "large_cache_size": 16, 00:17:39.367 "sequence_count": 2048, 00:17:39.367 "small_cache_size": 128, 00:17:39.367 "task_count": 2048 00:17:39.367 } 00:17:39.367 } 00:17:39.367 ] 00:17:39.367 }, 00:17:39.367 { 00:17:39.367 "subsystem": "bdev", 00:17:39.367 "config": [ 00:17:39.367 { 00:17:39.367 "method": "bdev_set_options", 00:17:39.367 "params": { 00:17:39.367 "bdev_auto_examine": true, 00:17:39.367 "bdev_io_cache_size": 256, 00:17:39.367 "bdev_io_pool_size": 65535, 00:17:39.367 "iobuf_large_cache_size": 16, 00:17:39.367 "iobuf_small_cache_size": 128 00:17:39.367 } 00:17:39.367 }, 00:17:39.367 { 00:17:39.367 "method": "bdev_raid_set_options", 00:17:39.367 "params": { 00:17:39.367 "process_window_size_kb": 1024 00:17:39.367 } 00:17:39.367 }, 00:17:39.367 { 00:17:39.367 "method": "bdev_iscsi_set_options", 00:17:39.367 "params": { 00:17:39.367 "timeout_sec": 30 00:17:39.367 } 00:17:39.367 }, 00:17:39.367 { 00:17:39.367 "method": "bdev_nvme_set_options", 00:17:39.367 "params": { 00:17:39.367 "action_on_timeout": "none", 00:17:39.367 "allow_accel_sequence": false, 00:17:39.367 "arbitration_burst": 0, 00:17:39.367 "bdev_retry_count": 3, 00:17:39.367 "ctrlr_loss_timeout_sec": 0, 00:17:39.367 "delay_cmd_submit": true, 00:17:39.367 "fast_io_fail_timeout_sec": 0, 00:17:39.367 "generate_uuids": false, 00:17:39.367 "high_priority_weight": 0, 00:17:39.367 "io_path_stat": false, 00:17:39.367 "io_queue_requests": 0, 00:17:39.367 "keep_alive_timeout_ms": 10000, 00:17:39.367 "low_priority_weight": 0, 00:17:39.367 "medium_priority_weight": 0, 00:17:39.367 "nvme_adminq_poll_period_us": 10000, 00:17:39.367 "nvme_ioq_poll_period_us": 0, 00:17:39.367 "reconnect_delay_sec": 0, 00:17:39.367 "timeout_admin_us": 0, 00:17:39.367 "timeout_us": 0, 00:17:39.367 "transport_ack_timeout": 0, 00:17:39.367 "transport_retry_count": 4, 00:17:39.367 "transport_tos": 0 00:17:39.367 } 00:17:39.367 }, 00:17:39.367 { 00:17:39.367 "method": "bdev_nvme_set_hotplug", 00:17:39.367 "params": { 00:17:39.367 "enable": false, 00:17:39.367 "period_us": 100000 00:17:39.367 } 00:17:39.367 }, 00:17:39.367 { 00:17:39.368 "method": "bdev_malloc_create", 00:17:39.368 "params": { 00:17:39.368 "block_size": 4096, 00:17:39.368 "name": "malloc0", 00:17:39.368 "num_blocks": 8192, 00:17:39.368 "optimal_io_boundary": 0, 00:17:39.368 "physical_block_size": 4096, 00:17:39.368 "uuid": 
"88473e89-2987-4b50-a2da-1972ebc15d78" 00:17:39.368 } 00:17:39.368 }, 00:17:39.368 { 00:17:39.368 "method": "bdev_wait_for_examine" 00:17:39.368 } 00:17:39.368 ] 00:17:39.368 }, 00:17:39.368 { 00:17:39.368 "subsystem": "nbd", 00:17:39.368 "config": [] 00:17:39.368 }, 00:17:39.368 { 00:17:39.368 "subsystem": "scheduler", 00:17:39.368 "config": [ 00:17:39.368 { 00:17:39.368 "method": "framework_set_scheduler", 00:17:39.368 "params": { 00:17:39.368 "name": "static" 00:17:39.368 } 00:17:39.368 } 00:17:39.368 ] 00:17:39.368 }, 00:17:39.368 { 00:17:39.368 "subsystem": "nvmf", 00:17:39.368 "config": [ 00:17:39.368 { 00:17:39.368 "method": "nvmf_set_config", 00:17:39.368 "params": { 00:17:39.368 "admin_cmd_passthru": { 00:17:39.368 "identify_ctrlr": false 00:17:39.368 }, 00:17:39.368 "discovery_filter": "match_any" 00:17:39.368 } 00:17:39.368 }, 00:17:39.368 { 00:17:39.368 "method": "nvmf_set_max_subsystems", 00:17:39.368 "params": { 00:17:39.368 "max_subsystems": 1024 00:17:39.368 } 00:17:39.368 }, 00:17:39.368 { 00:17:39.368 "method": "nvmf_set_crdt", 00:17:39.368 "params": { 00:17:39.368 "crdt1": 0, 00:17:39.368 "crdt2": 0, 00:17:39.368 "crdt3": 0 00:17:39.368 } 00:17:39.368 }, 00:17:39.368 { 00:17:39.368 "method": "nvmf_create_transport", 00:17:39.368 "params": { 00:17:39.368 "abort_timeout_sec": 1, 00:17:39.368 "buf_cache_size": 4294967295, 00:17:39.368 "c2h_success": false, 00:17:39.368 "dif_insert_or_strip": false, 00:17:39.368 "in_capsule_data_size": 4096, 00:17:39.368 "io_unit_size": 131072, 00:17:39.368 "max_aq_depth": 128, 00:17:39.368 "max_io_qpairs_per_ctrlr": 127, 00:17:39.368 "max_io_size": 131072, 00:17:39.368 "max_queue_depth": 128, 00:17:39.368 "num_shared_buffers": 511, 00:17:39.368 "sock_priority": 0, 00:17:39.368 "trtype": "TCP", 00:17:39.368 "zcopy": false 00:17:39.368 } 00:17:39.368 }, 00:17:39.368 { 00:17:39.368 "method": "nvmf_create_subsystem", 00:17:39.368 "params": { 00:17:39.368 "allow_any_host": false, 00:17:39.368 "ana_reporting": false, 00:17:39.368 "max_cntlid": 65519, 00:17:39.368 "max_namespaces": 10, 00:17:39.368 "min_cntlid": 1, 00:17:39.368 "model_number": "SPDK bdev Controller", 00:17:39.368 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.368 "serial_number": "SPDK00000000000001" 00:17:39.368 } 00:17:39.368 }, 00:17:39.368 { 00:17:39.368 "method": "nvmf_subsystem_add_host", 00:17:39.368 "params": { 00:17:39.368 "host": "nqn.2016-06.io.spdk:host1", 00:17:39.368 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.368 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:39.368 } 00:17:39.368 }, 00:17:39.368 { 00:17:39.368 "method": "nvmf_subsystem_add_ns", 00:17:39.368 "params": { 00:17:39.368 "namespace": { 00:17:39.368 "bdev_name": "malloc0", 00:17:39.368 "nguid": "88473E8929874B50A2DA1972EBC15D78", 00:17:39.368 "nsid": 1, 00:17:39.368 "uuid": "88473e89-2987-4b50-a2da-1972ebc15d78" 00:17:39.368 }, 00:17:39.368 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:39.368 } 00:17:39.368 }, 00:17:39.368 { 00:17:39.368 "method": "nvmf_subsystem_add_listener", 00:17:39.368 "params": { 00:17:39.368 "listen_address": { 00:17:39.368 "adrfam": "IPv4", 00:17:39.368 "traddr": "10.0.0.2", 00:17:39.368 "trsvcid": "4420", 00:17:39.368 "trtype": "TCP" 00:17:39.368 }, 00:17:39.368 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.368 "secure_channel": true 00:17:39.368 } 00:17:39.368 } 00:17:39.368 ] 00:17:39.368 } 00:17:39.368 ] 00:17:39.368 }' 00:17:39.368 00:26:26 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 
00:17:39.627 00:26:26 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:39.627 "subsystems": [ 00:17:39.627 { 00:17:39.627 "subsystem": "iobuf", 00:17:39.627 "config": [ 00:17:39.627 { 00:17:39.627 "method": "iobuf_set_options", 00:17:39.627 "params": { 00:17:39.627 "large_bufsize": 135168, 00:17:39.627 "large_pool_count": 1024, 00:17:39.627 "small_bufsize": 8192, 00:17:39.627 "small_pool_count": 8192 00:17:39.627 } 00:17:39.627 } 00:17:39.627 ] 00:17:39.627 }, 00:17:39.627 { 00:17:39.627 "subsystem": "sock", 00:17:39.627 "config": [ 00:17:39.627 { 00:17:39.627 "method": "sock_impl_set_options", 00:17:39.627 "params": { 00:17:39.627 "enable_ktls": false, 00:17:39.627 "enable_placement_id": 0, 00:17:39.627 "enable_quickack": false, 00:17:39.627 "enable_recv_pipe": true, 00:17:39.627 "enable_zerocopy_send_client": false, 00:17:39.627 "enable_zerocopy_send_server": true, 00:17:39.627 "impl_name": "posix", 00:17:39.627 "recv_buf_size": 2097152, 00:17:39.627 "send_buf_size": 2097152, 00:17:39.627 "tls_version": 0, 00:17:39.627 "zerocopy_threshold": 0 00:17:39.627 } 00:17:39.627 }, 00:17:39.627 { 00:17:39.627 "method": "sock_impl_set_options", 00:17:39.627 "params": { 00:17:39.627 "enable_ktls": false, 00:17:39.627 "enable_placement_id": 0, 00:17:39.627 "enable_quickack": false, 00:17:39.627 "enable_recv_pipe": true, 00:17:39.627 "enable_zerocopy_send_client": false, 00:17:39.627 "enable_zerocopy_send_server": true, 00:17:39.627 "impl_name": "ssl", 00:17:39.627 "recv_buf_size": 4096, 00:17:39.627 "send_buf_size": 4096, 00:17:39.627 "tls_version": 0, 00:17:39.627 "zerocopy_threshold": 0 00:17:39.627 } 00:17:39.627 } 00:17:39.627 ] 00:17:39.627 }, 00:17:39.627 { 00:17:39.627 "subsystem": "vmd", 00:17:39.627 "config": [] 00:17:39.627 }, 00:17:39.627 { 00:17:39.627 "subsystem": "accel", 00:17:39.627 "config": [ 00:17:39.627 { 00:17:39.627 "method": "accel_set_options", 00:17:39.627 "params": { 00:17:39.627 "buf_count": 2048, 00:17:39.627 "large_cache_size": 16, 00:17:39.627 "sequence_count": 2048, 00:17:39.627 "small_cache_size": 128, 00:17:39.627 "task_count": 2048 00:17:39.627 } 00:17:39.627 } 00:17:39.627 ] 00:17:39.627 }, 00:17:39.627 { 00:17:39.627 "subsystem": "bdev", 00:17:39.627 "config": [ 00:17:39.627 { 00:17:39.627 "method": "bdev_set_options", 00:17:39.627 "params": { 00:17:39.627 "bdev_auto_examine": true, 00:17:39.627 "bdev_io_cache_size": 256, 00:17:39.627 "bdev_io_pool_size": 65535, 00:17:39.627 "iobuf_large_cache_size": 16, 00:17:39.627 "iobuf_small_cache_size": 128 00:17:39.627 } 00:17:39.627 }, 00:17:39.627 { 00:17:39.627 "method": "bdev_raid_set_options", 00:17:39.627 "params": { 00:17:39.627 "process_window_size_kb": 1024 00:17:39.627 } 00:17:39.627 }, 00:17:39.627 { 00:17:39.627 "method": "bdev_iscsi_set_options", 00:17:39.627 "params": { 00:17:39.627 "timeout_sec": 30 00:17:39.627 } 00:17:39.627 }, 00:17:39.627 { 00:17:39.627 "method": "bdev_nvme_set_options", 00:17:39.627 "params": { 00:17:39.627 "action_on_timeout": "none", 00:17:39.627 "allow_accel_sequence": false, 00:17:39.627 "arbitration_burst": 0, 00:17:39.627 "bdev_retry_count": 3, 00:17:39.627 "ctrlr_loss_timeout_sec": 0, 00:17:39.627 "delay_cmd_submit": true, 00:17:39.627 "fast_io_fail_timeout_sec": 0, 00:17:39.627 "generate_uuids": false, 00:17:39.627 "high_priority_weight": 0, 00:17:39.627 "io_path_stat": false, 00:17:39.627 "io_queue_requests": 512, 00:17:39.627 "keep_alive_timeout_ms": 10000, 00:17:39.627 "low_priority_weight": 0, 00:17:39.627 "medium_priority_weight": 0, 00:17:39.628 "nvme_adminq_poll_period_us": 
10000, 00:17:39.628 "nvme_ioq_poll_period_us": 0, 00:17:39.628 "reconnect_delay_sec": 0, 00:17:39.628 "timeout_admin_us": 0, 00:17:39.628 "timeout_us": 0, 00:17:39.628 "transport_ack_timeout": 0, 00:17:39.628 "transport_retry_count": 4, 00:17:39.628 "transport_tos": 0 00:17:39.628 } 00:17:39.628 }, 00:17:39.628 { 00:17:39.628 "method": "bdev_nvme_attach_controller", 00:17:39.628 "params": { 00:17:39.628 "adrfam": "IPv4", 00:17:39.628 "ctrlr_loss_timeout_sec": 0, 00:17:39.628 "ddgst": false, 00:17:39.628 "fast_io_fail_timeout_sec": 0, 00:17:39.628 "hdgst": false, 00:17:39.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:39.628 "name": "TLSTEST", 00:17:39.628 "prchk_guard": false, 00:17:39.628 "prchk_reftag": false, 00:17:39.628 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:39.628 "reconnect_delay_sec": 0, 00:17:39.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.628 "traddr": "10.0.0.2", 00:17:39.628 "trsvcid": "4420", 00:17:39.628 "trtype": "TCP" 00:17:39.628 } 00:17:39.628 }, 00:17:39.628 { 00:17:39.628 "method": "bdev_nvme_set_hotplug", 00:17:39.628 "params": { 00:17:39.628 "enable": false, 00:17:39.628 "period_us": 100000 00:17:39.628 } 00:17:39.628 }, 00:17:39.628 { 00:17:39.628 "method": "bdev_wait_for_examine" 00:17:39.628 } 00:17:39.628 ] 00:17:39.628 }, 00:17:39.628 { 00:17:39.628 "subsystem": "nbd", 00:17:39.628 "config": [] 00:17:39.628 } 00:17:39.628 ] 00:17:39.628 }' 00:17:39.628 00:26:26 -- target/tls.sh@208 -- # killprocess 89066 00:17:39.628 00:26:26 -- common/autotest_common.sh@926 -- # '[' -z 89066 ']' 00:17:39.628 00:26:26 -- common/autotest_common.sh@930 -- # kill -0 89066 00:17:39.628 00:26:26 -- common/autotest_common.sh@931 -- # uname 00:17:39.628 00:26:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:39.628 00:26:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89066 00:17:39.628 00:26:26 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:39.628 00:26:26 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:39.628 killing process with pid 89066 00:17:39.628 00:26:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89066' 00:17:39.628 Received shutdown signal, test time was about 10.000000 seconds 00:17:39.628 00:17:39.628 Latency(us) 00:17:39.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.628 =================================================================================================================== 00:17:39.628 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:39.628 00:26:26 -- common/autotest_common.sh@945 -- # kill 89066 00:17:39.628 00:26:26 -- common/autotest_common.sh@950 -- # wait 89066 00:17:39.886 00:26:27 -- target/tls.sh@209 -- # killprocess 88963 00:17:39.886 00:26:27 -- common/autotest_common.sh@926 -- # '[' -z 88963 ']' 00:17:39.886 00:26:27 -- common/autotest_common.sh@930 -- # kill -0 88963 00:17:39.886 00:26:27 -- common/autotest_common.sh@931 -- # uname 00:17:39.886 00:26:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:39.886 00:26:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88963 00:17:39.886 00:26:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:39.886 00:26:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:39.886 killing process with pid 88963 00:17:39.886 00:26:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88963' 00:17:39.886 00:26:27 -- 
common/autotest_common.sh@945 -- # kill 88963 00:17:39.886 00:26:27 -- common/autotest_common.sh@950 -- # wait 88963 00:17:40.144 00:26:27 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:40.144 00:26:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:40.144 00:26:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:40.144 00:26:27 -- target/tls.sh@212 -- # echo '{ 00:17:40.144 "subsystems": [ 00:17:40.144 { 00:17:40.144 "subsystem": "iobuf", 00:17:40.144 "config": [ 00:17:40.144 { 00:17:40.144 "method": "iobuf_set_options", 00:17:40.144 "params": { 00:17:40.144 "large_bufsize": 135168, 00:17:40.144 "large_pool_count": 1024, 00:17:40.144 "small_bufsize": 8192, 00:17:40.144 "small_pool_count": 8192 00:17:40.144 } 00:17:40.144 } 00:17:40.144 ] 00:17:40.144 }, 00:17:40.144 { 00:17:40.144 "subsystem": "sock", 00:17:40.144 "config": [ 00:17:40.144 { 00:17:40.144 "method": "sock_impl_set_options", 00:17:40.144 "params": { 00:17:40.144 "enable_ktls": false, 00:17:40.144 "enable_placement_id": 0, 00:17:40.145 "enable_quickack": false, 00:17:40.145 "enable_recv_pipe": true, 00:17:40.145 "enable_zerocopy_send_client": false, 00:17:40.145 "enable_zerocopy_send_server": true, 00:17:40.145 "impl_name": "posix", 00:17:40.145 "recv_buf_size": 2097152, 00:17:40.145 "send_buf_size": 2097152, 00:17:40.145 "tls_version": 0, 00:17:40.145 "zerocopy_threshold": 0 00:17:40.145 } 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "method": "sock_impl_set_options", 00:17:40.145 "params": { 00:17:40.145 "enable_ktls": false, 00:17:40.145 "enable_placement_id": 0, 00:17:40.145 "enable_quickack": false, 00:17:40.145 "enable_recv_pipe": true, 00:17:40.145 "enable_zerocopy_send_client": false, 00:17:40.145 "enable_zerocopy_send_server": true, 00:17:40.145 "impl_name": "ssl", 00:17:40.145 "recv_buf_size": 4096, 00:17:40.145 "send_buf_size": 4096, 00:17:40.145 "tls_version": 0, 00:17:40.145 "zerocopy_threshold": 0 00:17:40.145 } 00:17:40.145 } 00:17:40.145 ] 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "subsystem": "vmd", 00:17:40.145 "config": [] 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "subsystem": "accel", 00:17:40.145 "config": [ 00:17:40.145 { 00:17:40.145 "method": "accel_set_options", 00:17:40.145 "params": { 00:17:40.145 "buf_count": 2048, 00:17:40.145 "large_cache_size": 16, 00:17:40.145 "sequence_count": 2048, 00:17:40.145 "small_cache_size": 128, 00:17:40.145 "task_count": 2048 00:17:40.145 } 00:17:40.145 } 00:17:40.145 ] 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "subsystem": "bdev", 00:17:40.145 "config": [ 00:17:40.145 { 00:17:40.145 "method": "bdev_set_options", 00:17:40.145 "params": { 00:17:40.145 "bdev_auto_examine": true, 00:17:40.145 "bdev_io_cache_size": 256, 00:17:40.145 "bdev_io_pool_size": 65535, 00:17:40.145 "iobuf_large_cache_size": 16, 00:17:40.145 "iobuf_small_cache_size": 128 00:17:40.145 } 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "method": "bdev_raid_set_options", 00:17:40.145 "params": { 00:17:40.145 "process_window_size_kb": 1024 00:17:40.145 } 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "method": "bdev_iscsi_set_options", 00:17:40.145 "params": { 00:17:40.145 "timeout_sec": 30 00:17:40.145 } 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "method": "bdev_nvme_set_options", 00:17:40.145 "params": { 00:17:40.145 "action_on_timeout": "none", 00:17:40.145 "allow_accel_sequence": false, 00:17:40.145 "arbitration_burst": 0, 00:17:40.145 "bdev_retry_count": 3, 00:17:40.145 "ctrlr_loss_timeout_sec": 0, 00:17:40.145 "delay_cmd_submit": true, 00:17:40.145 
"fast_io_fail_timeout_sec": 0, 00:17:40.145 "generate_uuids": false, 00:17:40.145 "high_priority_weight": 0, 00:17:40.145 "io_path_stat": false, 00:17:40.145 "io_queue_requests": 0, 00:17:40.145 "keep_alive_timeout_ms": 10000, 00:17:40.145 "low_priority_weight": 0, 00:17:40.145 "medium_priority_weight": 0, 00:17:40.145 "nvme_adminq_poll_period_us": 10000, 00:17:40.145 "nvme_ioq_poll_period_us": 0, 00:17:40.145 "reconnect_delay_sec": 0, 00:17:40.145 "timeout_admin_us": 0, 00:17:40.145 "timeout_us": 0, 00:17:40.145 "transport_ack_timeout": 0, 00:17:40.145 "transport_retry_count": 4, 00:17:40.145 "transport_tos": 0 00:17:40.145 } 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "method": "bdev_nvme_set_hotplug", 00:17:40.145 "params": { 00:17:40.145 "enable": false, 00:17:40.145 "period_us": 100000 00:17:40.145 } 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "method": "bdev_malloc_create", 00:17:40.145 "params": { 00:17:40.145 "block_size": 4096, 00:17:40.145 "name": "malloc0", 00:17:40.145 "num_blocks": 8192, 00:17:40.145 "optimal_io_boundary": 0, 00:17:40.145 "physical_block_size": 4096, 00:17:40.145 "uuid": "88473e89-2987-4b50-a2da-1972ebc15d78" 00:17:40.145 } 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "method": "bdev_wait_for_examine" 00:17:40.145 } 00:17:40.145 ] 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "subsystem": "nbd", 00:17:40.145 "config": [] 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "subsystem": "scheduler", 00:17:40.145 "config": [ 00:17:40.145 { 00:17:40.145 "method": "framework_set_scheduler", 00:17:40.145 "params": { 00:17:40.145 "name": "static" 00:17:40.145 } 00:17:40.145 } 00:17:40.145 ] 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "subsystem": "nvmf", 00:17:40.145 "config": [ 00:17:40.145 { 00:17:40.145 "method": "nvmf_set_config", 00:17:40.145 "params": { 00:17:40.145 "admin_cmd_passthru": { 00:17:40.145 "identify_ctrlr": false 00:17:40.145 }, 00:17:40.145 "discovery_filter": "match_any" 00:17:40.145 } 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "method": "nvmf_set_max_subsystems", 00:17:40.145 "params": { 00:17:40.145 "max_subsystems": 1024 00:17:40.145 } 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "method": "nvmf_set_crdt", 00:17:40.145 "params": { 00:17:40.145 "crdt1": 0, 00:17:40.145 "crdt2": 0, 00:17:40.145 "crdt3": 0 00:17:40.145 } 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "method": "nvmf_create_transport", 00:17:40.145 "params": { 00:17:40.145 "abort_timeout_sec": 1, 00:17:40.145 "buf_cache_size": 4294967295, 00:17:40.145 "c2h_success": false, 00:17:40.145 "dif_insert_or_strip": false, 00:17:40.145 "in_capsule_data_size": 4096, 00:17:40.145 "io_unit_size": 131072, 00:17:40.145 "max_aq_depth": 128, 00:17:40.145 "max_io_qpairs_per_ctrlr": 127, 00:17:40.145 "max_io_size": 131072, 00:17:40.145 "max_queue_depth": 128, 00:17:40.145 "num_shared_buffers": 511, 00:17:40.145 "sock_priority": 0, 00:17:40.145 "trtype": "TCP", 00:17:40.145 "zcopy": false 00:17:40.145 } 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "method": "nvmf_create_subsystem", 00:17:40.145 "params": { 00:17:40.145 "allow_any_host": false, 00:17:40.145 "ana_reporting": false, 00:17:40.145 "max_cntlid": 65519, 00:17:40.145 "max_namespaces": 10, 00:17:40.145 "min_cntlid": 1, 00:17:40.145 "model_number": "SPDK bdev Controller", 00:17:40.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.145 "serial_number": "SPDK00000000000001" 00:17:40.145 } 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "method": "nvmf_subsystem_add_host", 00:17:40.145 "params": { 00:17:40.145 "host": "nqn.2016-06.io.spdk:host1", 00:17:40.145 
"nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.145 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:40.145 } 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "method": "nvmf_subsystem_add_ns", 00:17:40.145 "params": { 00:17:40.145 "namespace": { 00:17:40.145 "bdev_name": "malloc0", 00:17:40.145 "nguid": "88473E8929874B50A2DA1972EBC15D78", 00:17:40.145 "nsid": 1, 00:17:40.145 "uuid": "88473e89-2987-4b50-a2da-1972ebc15d78" 00:17:40.145 }, 00:17:40.145 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:40.145 } 00:17:40.145 }, 00:17:40.145 { 00:17:40.145 "method": "nvmf_subsystem_add_listener", 00:17:40.145 "params": { 00:17:40.145 "listen_address": { 00:17:40.145 "adrfam": "IPv4", 00:17:40.145 "traddr": "10.0.0.2", 00:17:40.145 "trsvcid": "4420", 00:17:40.145 "trtype": "TCP" 00:17:40.145 }, 00:17:40.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.145 "secure_channel": true 00:17:40.145 } 00:17:40.145 } 00:17:40.145 ] 00:17:40.145 } 00:17:40.145 ] 00:17:40.145 }' 00:17:40.145 00:26:27 -- common/autotest_common.sh@10 -- # set +x 00:17:40.145 00:26:27 -- nvmf/common.sh@469 -- # nvmfpid=89139 00:17:40.145 00:26:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:40.145 00:26:27 -- nvmf/common.sh@470 -- # waitforlisten 89139 00:17:40.145 00:26:27 -- common/autotest_common.sh@819 -- # '[' -z 89139 ']' 00:17:40.145 00:26:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.145 00:26:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:40.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.145 00:26:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.145 00:26:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:40.145 00:26:27 -- common/autotest_common.sh@10 -- # set +x 00:17:40.405 [2024-07-13 00:26:27.418103] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:40.405 [2024-07-13 00:26:27.418210] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.405 [2024-07-13 00:26:27.558991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.663 [2024-07-13 00:26:27.679831] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:40.663 [2024-07-13 00:26:27.679978] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.663 [2024-07-13 00:26:27.679990] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.663 [2024-07-13 00:26:27.679999] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:40.663 [2024-07-13 00:26:27.680028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.921 [2024-07-13 00:26:27.929493] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.921 [2024-07-13 00:26:27.961449] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:40.921 [2024-07-13 00:26:27.961727] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.190 00:26:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:41.190 00:26:28 -- common/autotest_common.sh@852 -- # return 0 00:17:41.190 00:26:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:41.190 00:26:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:41.190 00:26:28 -- common/autotest_common.sh@10 -- # set +x 00:17:41.190 00:26:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.190 00:26:28 -- target/tls.sh@216 -- # bdevperf_pid=89183 00:17:41.190 00:26:28 -- target/tls.sh@217 -- # waitforlisten 89183 /var/tmp/bdevperf.sock 00:17:41.190 00:26:28 -- common/autotest_common.sh@819 -- # '[' -z 89183 ']' 00:17:41.190 00:26:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.190 00:26:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:41.190 00:26:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.190 00:26:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:41.190 00:26:28 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:41.190 00:26:28 -- common/autotest_common.sh@10 -- # set +x 00:17:41.190 00:26:28 -- target/tls.sh@213 -- # echo '{ 00:17:41.190 "subsystems": [ 00:17:41.190 { 00:17:41.190 "subsystem": "iobuf", 00:17:41.190 "config": [ 00:17:41.190 { 00:17:41.190 "method": "iobuf_set_options", 00:17:41.190 "params": { 00:17:41.190 "large_bufsize": 135168, 00:17:41.190 "large_pool_count": 1024, 00:17:41.190 "small_bufsize": 8192, 00:17:41.190 "small_pool_count": 8192 00:17:41.190 } 00:17:41.190 } 00:17:41.190 ] 00:17:41.190 }, 00:17:41.190 { 00:17:41.190 "subsystem": "sock", 00:17:41.190 "config": [ 00:17:41.190 { 00:17:41.190 "method": "sock_impl_set_options", 00:17:41.190 "params": { 00:17:41.190 "enable_ktls": false, 00:17:41.190 "enable_placement_id": 0, 00:17:41.190 "enable_quickack": false, 00:17:41.190 "enable_recv_pipe": true, 00:17:41.190 "enable_zerocopy_send_client": false, 00:17:41.190 "enable_zerocopy_send_server": true, 00:17:41.190 "impl_name": "posix", 00:17:41.190 "recv_buf_size": 2097152, 00:17:41.190 "send_buf_size": 2097152, 00:17:41.190 "tls_version": 0, 00:17:41.190 "zerocopy_threshold": 0 00:17:41.190 } 00:17:41.190 }, 00:17:41.190 { 00:17:41.190 "method": "sock_impl_set_options", 00:17:41.190 "params": { 00:17:41.190 "enable_ktls": false, 00:17:41.190 "enable_placement_id": 0, 00:17:41.190 "enable_quickack": false, 00:17:41.190 "enable_recv_pipe": true, 00:17:41.190 "enable_zerocopy_send_client": false, 00:17:41.190 "enable_zerocopy_send_server": true, 00:17:41.190 "impl_name": "ssl", 00:17:41.190 "recv_buf_size": 4096, 00:17:41.190 "send_buf_size": 4096, 00:17:41.190 "tls_version": 0, 00:17:41.190 "zerocopy_threshold": 0 
00:17:41.190 } 00:17:41.190 } 00:17:41.190 ] 00:17:41.190 }, 00:17:41.190 { 00:17:41.190 "subsystem": "vmd", 00:17:41.190 "config": [] 00:17:41.190 }, 00:17:41.190 { 00:17:41.190 "subsystem": "accel", 00:17:41.190 "config": [ 00:17:41.190 { 00:17:41.190 "method": "accel_set_options", 00:17:41.190 "params": { 00:17:41.190 "buf_count": 2048, 00:17:41.190 "large_cache_size": 16, 00:17:41.190 "sequence_count": 2048, 00:17:41.190 "small_cache_size": 128, 00:17:41.190 "task_count": 2048 00:17:41.190 } 00:17:41.190 } 00:17:41.190 ] 00:17:41.190 }, 00:17:41.190 { 00:17:41.190 "subsystem": "bdev", 00:17:41.190 "config": [ 00:17:41.190 { 00:17:41.190 "method": "bdev_set_options", 00:17:41.190 "params": { 00:17:41.190 "bdev_auto_examine": true, 00:17:41.190 "bdev_io_cache_size": 256, 00:17:41.190 "bdev_io_pool_size": 65535, 00:17:41.190 "iobuf_large_cache_size": 16, 00:17:41.190 "iobuf_small_cache_size": 128 00:17:41.190 } 00:17:41.190 }, 00:17:41.190 { 00:17:41.190 "method": "bdev_raid_set_options", 00:17:41.190 "params": { 00:17:41.190 "process_window_size_kb": 1024 00:17:41.190 } 00:17:41.190 }, 00:17:41.190 { 00:17:41.190 "method": "bdev_iscsi_set_options", 00:17:41.190 "params": { 00:17:41.190 "timeout_sec": 30 00:17:41.190 } 00:17:41.190 }, 00:17:41.190 { 00:17:41.190 "method": "bdev_nvme_set_options", 00:17:41.190 "params": { 00:17:41.190 "action_on_timeout": "none", 00:17:41.190 "allow_accel_sequence": false, 00:17:41.190 "arbitration_burst": 0, 00:17:41.190 "bdev_retry_count": 3, 00:17:41.190 "ctrlr_loss_timeout_sec": 0, 00:17:41.190 "delay_cmd_submit": true, 00:17:41.190 "fast_io_fail_timeout_sec": 0, 00:17:41.190 "generate_uuids": false, 00:17:41.190 "high_priority_weight": 0, 00:17:41.190 "io_path_stat": false, 00:17:41.190 "io_queue_requests": 512, 00:17:41.190 "keep_alive_timeout_ms": 10000, 00:17:41.190 "low_priority_weight": 0, 00:17:41.190 "medium_priority_weight": 0, 00:17:41.190 "nvme_adminq_poll_period_us": 10000, 00:17:41.190 "nvme_ioq_poll_period_us": 0, 00:17:41.190 "reconnect_delay_sec": 0, 00:17:41.190 "timeout_admin_us": 0, 00:17:41.190 "timeout_us": 0, 00:17:41.190 "transport_ack_timeout": 0, 00:17:41.190 "transport_retry_count": 4, 00:17:41.190 "transport_tos": 0 00:17:41.190 } 00:17:41.190 }, 00:17:41.190 { 00:17:41.190 "method": "bdev_nvme_attach_controller", 00:17:41.190 "params": { 00:17:41.190 "adrfam": "IPv4", 00:17:41.190 "ctrlr_loss_timeout_sec": 0, 00:17:41.190 "ddgst": false, 00:17:41.190 "fast_io_fail_timeout_sec": 0, 00:17:41.190 "hdgst": false, 00:17:41.190 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:41.190 "name": "TLSTEST", 00:17:41.190 "prchk_guard": false, 00:17:41.190 "prchk_reftag": false, 00:17:41.190 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:41.190 "reconnect_delay_sec": 0, 00:17:41.190 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.190 "traddr": "10.0.0.2", 00:17:41.190 "trsvcid": "4420", 00:17:41.190 "trtype": "TCP" 00:17:41.190 } 00:17:41.190 }, 00:17:41.190 { 00:17:41.190 "method": "bdev_nvme_set_hotplug", 00:17:41.190 "params": { 00:17:41.190 "enable": false, 00:17:41.190 "period_us": 100000 00:17:41.190 } 00:17:41.190 }, 00:17:41.190 { 00:17:41.190 "method": "bdev_wait_for_examine" 00:17:41.190 } 00:17:41.190 ] 00:17:41.190 }, 00:17:41.190 { 00:17:41.190 "subsystem": "nbd", 00:17:41.190 "config": [] 00:17:41.190 } 00:17:41.190 ] 00:17:41.190 }' 00:17:41.190 [2024-07-13 00:26:28.413269] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
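On the initiator side, bdevperf is launched in the same config-replay fashion: "-z" keeps it idle until triggered over RPC, and "-c /dev/fd/63" feeds it the bdevperfconf JSON, whose bdev_nvme_attach_controller entry performs the TLS connection with the PSK. The I/O itself is then kicked off through the helper script, as the run below shows. A sketch of the two commands involved (flags taken from this run):

  # Start bdevperf idle, with the saved configuration including the TLS-attached controller
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")
  # Trigger the queued verify workload and wait for the results
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests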
00:17:41.190 [2024-07-13 00:26:28.413386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89183 ] 00:17:41.463 [2024-07-13 00:26:28.556225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.463 [2024-07-13 00:26:28.649527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.721 [2024-07-13 00:26:28.799449] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:42.287 00:26:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:42.287 00:26:29 -- common/autotest_common.sh@852 -- # return 0 00:17:42.287 00:26:29 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:42.287 Running I/O for 10 seconds... 00:17:54.477 00:17:54.477 Latency(us) 00:17:54.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.477 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:54.477 Verification LBA range: start 0x0 length 0x2000 00:17:54.477 TLSTESTn1 : 10.02 5729.92 22.38 0.00 0.00 22299.64 4438.57 18707.55 00:17:54.477 =================================================================================================================== 00:17:54.477 Total : 5729.92 22.38 0.00 0.00 22299.64 4438.57 18707.55 00:17:54.477 0 00:17:54.477 00:26:39 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:54.477 00:26:39 -- target/tls.sh@223 -- # killprocess 89183 00:17:54.477 00:26:39 -- common/autotest_common.sh@926 -- # '[' -z 89183 ']' 00:17:54.477 00:26:39 -- common/autotest_common.sh@930 -- # kill -0 89183 00:17:54.477 00:26:39 -- common/autotest_common.sh@931 -- # uname 00:17:54.477 00:26:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:54.477 00:26:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89183 00:17:54.477 00:26:39 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:54.477 00:26:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:54.477 killing process with pid 89183 00:17:54.477 00:26:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89183' 00:17:54.477 00:26:39 -- common/autotest_common.sh@945 -- # kill 89183 00:17:54.477 Received shutdown signal, test time was about 10.000000 seconds 00:17:54.478 00:17:54.478 Latency(us) 00:17:54.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.478 =================================================================================================================== 00:17:54.478 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:54.478 00:26:39 -- common/autotest_common.sh@950 -- # wait 89183 00:17:54.478 00:26:39 -- target/tls.sh@224 -- # killprocess 89139 00:17:54.478 00:26:39 -- common/autotest_common.sh@926 -- # '[' -z 89139 ']' 00:17:54.478 00:26:39 -- common/autotest_common.sh@930 -- # kill -0 89139 00:17:54.478 00:26:39 -- common/autotest_common.sh@931 -- # uname 00:17:54.478 00:26:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:54.478 00:26:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89139 00:17:54.478 00:26:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:54.478 00:26:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 
= sudo ']' 00:17:54.478 killing process with pid 89139 00:17:54.478 00:26:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89139' 00:17:54.478 00:26:39 -- common/autotest_common.sh@945 -- # kill 89139 00:17:54.478 00:26:39 -- common/autotest_common.sh@950 -- # wait 89139 00:17:54.478 00:26:40 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:54.478 00:26:40 -- target/tls.sh@227 -- # cleanup 00:17:54.478 00:26:40 -- target/tls.sh@15 -- # process_shm --id 0 00:17:54.478 00:26:40 -- common/autotest_common.sh@796 -- # type=--id 00:17:54.478 00:26:40 -- common/autotest_common.sh@797 -- # id=0 00:17:54.478 00:26:40 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:54.478 00:26:40 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:54.478 00:26:40 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:54.478 00:26:40 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:54.478 00:26:40 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:54.478 00:26:40 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:54.478 nvmf_trace.0 00:17:54.478 00:26:40 -- common/autotest_common.sh@811 -- # return 0 00:17:54.478 00:26:40 -- target/tls.sh@16 -- # killprocess 89183 00:17:54.478 00:26:40 -- common/autotest_common.sh@926 -- # '[' -z 89183 ']' 00:17:54.478 00:26:40 -- common/autotest_common.sh@930 -- # kill -0 89183 00:17:54.478 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (89183) - No such process 00:17:54.478 Process with pid 89183 is not found 00:17:54.478 00:26:40 -- common/autotest_common.sh@953 -- # echo 'Process with pid 89183 is not found' 00:17:54.478 00:26:40 -- target/tls.sh@17 -- # nvmftestfini 00:17:54.478 00:26:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:54.478 00:26:40 -- nvmf/common.sh@116 -- # sync 00:17:54.478 00:26:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:54.478 00:26:40 -- nvmf/common.sh@119 -- # set +e 00:17:54.478 00:26:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:54.478 00:26:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:54.478 rmmod nvme_tcp 00:17:54.478 rmmod nvme_fabrics 00:17:54.478 rmmod nvme_keyring 00:17:54.478 00:26:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:54.478 00:26:40 -- nvmf/common.sh@123 -- # set -e 00:17:54.478 00:26:40 -- nvmf/common.sh@124 -- # return 0 00:17:54.478 00:26:40 -- nvmf/common.sh@477 -- # '[' -n 89139 ']' 00:17:54.478 00:26:40 -- nvmf/common.sh@478 -- # killprocess 89139 00:17:54.478 00:26:40 -- common/autotest_common.sh@926 -- # '[' -z 89139 ']' 00:17:54.478 00:26:40 -- common/autotest_common.sh@930 -- # kill -0 89139 00:17:54.478 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (89139) - No such process 00:17:54.478 Process with pid 89139 is not found 00:17:54.478 00:26:40 -- common/autotest_common.sh@953 -- # echo 'Process with pid 89139 is not found' 00:17:54.478 00:26:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:54.478 00:26:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:54.478 00:26:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:54.478 00:26:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:54.478 00:26:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:54.478 00:26:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.478 00:26:40 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.478 00:26:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.478 00:26:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:54.478 00:26:40 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:54.478 00:17:54.478 real 1m11.103s 00:17:54.478 user 1m47.161s 00:17:54.478 sys 0m25.969s 00:17:54.478 00:26:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:54.478 ************************************ 00:17:54.478 END TEST nvmf_tls 00:17:54.478 ************************************ 00:17:54.478 00:26:40 -- common/autotest_common.sh@10 -- # set +x 00:17:54.478 00:26:40 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:54.478 00:26:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:54.478 00:26:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:54.478 00:26:40 -- common/autotest_common.sh@10 -- # set +x 00:17:54.478 ************************************ 00:17:54.478 START TEST nvmf_fips 00:17:54.478 ************************************ 00:17:54.478 00:26:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:54.478 * Looking for test storage... 00:17:54.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:54.478 00:26:40 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:54.478 00:26:40 -- nvmf/common.sh@7 -- # uname -s 00:17:54.478 00:26:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.478 00:26:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.478 00:26:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.478 00:26:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.478 00:26:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.478 00:26:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.478 00:26:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.478 00:26:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.478 00:26:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.478 00:26:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.478 00:26:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:17:54.478 00:26:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:17:54.478 00:26:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.478 00:26:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.478 00:26:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:54.478 00:26:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:54.478 00:26:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.478 00:26:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.478 00:26:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.478 00:26:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.478 00:26:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.478 00:26:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.478 00:26:40 -- paths/export.sh@5 -- # export PATH 00:17:54.478 00:26:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.478 00:26:40 -- nvmf/common.sh@46 -- # : 0 00:17:54.479 00:26:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:54.479 00:26:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:54.479 00:26:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:54.479 00:26:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.479 00:26:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.479 00:26:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:54.479 00:26:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:54.479 00:26:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:54.479 00:26:40 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:54.479 00:26:40 -- fips/fips.sh@89 -- # check_openssl_version 00:17:54.479 00:26:40 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:54.479 00:26:40 -- fips/fips.sh@85 -- # openssl version 00:17:54.479 00:26:40 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:54.479 00:26:40 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:54.479 00:26:40 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:54.479 00:26:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:54.479 00:26:40 -- scripts/common.sh@333 -- # local ver2 
ver2_l 00:17:54.479 00:26:40 -- scripts/common.sh@335 -- # IFS=.-: 00:17:54.479 00:26:40 -- scripts/common.sh@335 -- # read -ra ver1 00:17:54.479 00:26:40 -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.479 00:26:40 -- scripts/common.sh@336 -- # read -ra ver2 00:17:54.479 00:26:40 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:54.479 00:26:40 -- scripts/common.sh@339 -- # ver1_l=3 00:17:54.479 00:26:40 -- scripts/common.sh@340 -- # ver2_l=3 00:17:54.479 00:26:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:54.479 00:26:40 -- scripts/common.sh@343 -- # case "$op" in 00:17:54.479 00:26:40 -- scripts/common.sh@347 -- # : 1 00:17:54.479 00:26:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:54.479 00:26:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:54.479 00:26:40 -- scripts/common.sh@364 -- # decimal 3 00:17:54.479 00:26:40 -- scripts/common.sh@352 -- # local d=3 00:17:54.479 00:26:40 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:54.479 00:26:40 -- scripts/common.sh@354 -- # echo 3 00:17:54.479 00:26:40 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:54.479 00:26:40 -- scripts/common.sh@365 -- # decimal 3 00:17:54.479 00:26:40 -- scripts/common.sh@352 -- # local d=3 00:17:54.479 00:26:40 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:54.479 00:26:40 -- scripts/common.sh@354 -- # echo 3 00:17:54.479 00:26:40 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:54.479 00:26:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:54.479 00:26:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:54.479 00:26:40 -- scripts/common.sh@363 -- # (( v++ )) 00:17:54.479 00:26:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:54.479 00:26:40 -- scripts/common.sh@364 -- # decimal 0 00:17:54.479 00:26:40 -- scripts/common.sh@352 -- # local d=0 00:17:54.479 00:26:40 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:54.479 00:26:40 -- scripts/common.sh@354 -- # echo 0 00:17:54.479 00:26:40 -- scripts/common.sh@364 -- # ver1[v]=0 00:17:54.479 00:26:40 -- scripts/common.sh@365 -- # decimal 0 00:17:54.479 00:26:40 -- scripts/common.sh@352 -- # local d=0 00:17:54.479 00:26:40 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:54.479 00:26:40 -- scripts/common.sh@354 -- # echo 0 00:17:54.479 00:26:40 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:54.479 00:26:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:54.479 00:26:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:54.479 00:26:40 -- scripts/common.sh@363 -- # (( v++ )) 00:17:54.479 00:26:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:54.479 00:26:40 -- scripts/common.sh@364 -- # decimal 9 00:17:54.479 00:26:40 -- scripts/common.sh@352 -- # local d=9 00:17:54.479 00:26:40 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:54.479 00:26:40 -- scripts/common.sh@354 -- # echo 9 00:17:54.479 00:26:40 -- scripts/common.sh@364 -- # ver1[v]=9 00:17:54.479 00:26:40 -- scripts/common.sh@365 -- # decimal 0 00:17:54.479 00:26:40 -- scripts/common.sh@352 -- # local d=0 00:17:54.479 00:26:40 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:54.479 00:26:40 -- scripts/common.sh@354 -- # echo 0 00:17:54.479 00:26:40 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:54.479 00:26:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:54.479 00:26:40 -- scripts/common.sh@366 -- # return 0 00:17:54.479 00:26:40 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:54.479 00:26:40 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:54.479 00:26:40 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:54.479 00:26:40 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:54.479 00:26:40 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:54.479 00:26:40 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:54.479 00:26:40 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:54.479 00:26:40 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:54.479 00:26:40 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:54.479 00:26:40 -- fips/fips.sh@114 -- # build_openssl_config 00:17:54.479 00:26:40 -- fips/fips.sh@37 -- # cat 00:17:54.479 00:26:40 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:17:54.479 00:26:40 -- fips/fips.sh@58 -- # cat - 00:17:54.479 00:26:40 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:54.479 00:26:40 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:54.479 00:26:40 -- fips/fips.sh@117 -- # mapfile -t providers 00:17:54.479 00:26:40 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:17:54.479 00:26:40 -- fips/fips.sh@117 -- # openssl list -providers 00:17:54.479 00:26:40 -- fips/fips.sh@117 -- # grep name 00:17:54.479 00:26:40 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:54.479 00:26:40 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:54.479 00:26:40 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:54.479 00:26:40 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:54.479 00:26:40 -- common/autotest_common.sh@640 -- # local es=0 00:17:54.479 00:26:40 -- fips/fips.sh@128 -- # : 00:17:54.479 00:26:40 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:54.479 00:26:40 -- common/autotest_common.sh@628 -- # local arg=openssl 00:17:54.479 00:26:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:54.479 00:26:40 -- common/autotest_common.sh@632 -- # type -t openssl 00:17:54.479 00:26:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:54.479 00:26:40 -- common/autotest_common.sh@634 -- # type -P openssl 00:17:54.479 00:26:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:54.479 00:26:40 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:17:54.479 00:26:40 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:17:54.479 00:26:40 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:17:54.479 Error setting digest 00:17:54.479 00826340F67F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:54.479 00826340F67F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:54.479 00:26:40 -- common/autotest_common.sh@643 -- # es=1 00:17:54.479 00:26:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:54.479 00:26:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:54.479 00:26:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:54.479 00:26:40 -- fips/fips.sh@131 -- # nvmftestinit 00:17:54.479 00:26:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:54.479 00:26:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.479 00:26:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:54.479 00:26:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:54.479 00:26:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:54.479 00:26:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.479 00:26:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.479 00:26:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.479 00:26:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:54.479 00:26:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:54.479 00:26:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:54.479 00:26:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:54.479 00:26:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:54.479 00:26:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:54.479 00:26:40 -- 
nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.479 00:26:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.479 00:26:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:54.479 00:26:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:54.479 00:26:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:54.479 00:26:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:54.479 00:26:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:54.479 00:26:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.479 00:26:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:54.479 00:26:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:54.479 00:26:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:54.479 00:26:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:54.479 00:26:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:54.479 00:26:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:54.479 Cannot find device "nvmf_tgt_br" 00:17:54.479 00:26:40 -- nvmf/common.sh@154 -- # true 00:17:54.479 00:26:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:54.479 Cannot find device "nvmf_tgt_br2" 00:17:54.479 00:26:40 -- nvmf/common.sh@155 -- # true 00:17:54.479 00:26:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:54.479 00:26:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:54.479 Cannot find device "nvmf_tgt_br" 00:17:54.479 00:26:40 -- nvmf/common.sh@157 -- # true 00:17:54.479 00:26:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:54.479 Cannot find device "nvmf_tgt_br2" 00:17:54.479 00:26:40 -- nvmf/common.sh@158 -- # true 00:17:54.479 00:26:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:54.479 00:26:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:54.479 00:26:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:54.480 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.480 00:26:40 -- nvmf/common.sh@161 -- # true 00:17:54.480 00:26:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:54.480 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.480 00:26:40 -- nvmf/common.sh@162 -- # true 00:17:54.480 00:26:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:54.480 00:26:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:54.480 00:26:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:54.480 00:26:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:54.480 00:26:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:54.480 00:26:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:54.480 00:26:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:54.480 00:26:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:54.480 00:26:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:54.480 00:26:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:54.480 00:26:40 -- 
nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:54.480 00:26:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:54.480 00:26:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:54.480 00:26:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:54.480 00:26:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:54.480 00:26:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:54.480 00:26:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:54.480 00:26:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:54.480 00:26:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:54.480 00:26:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:54.480 00:26:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:54.480 00:26:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:54.480 00:26:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:54.480 00:26:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:54.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:54.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:17:54.480 00:17:54.480 --- 10.0.0.2 ping statistics --- 00:17:54.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.480 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:17:54.480 00:26:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:54.480 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:54.480 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:17:54.480 00:17:54.480 --- 10.0.0.3 ping statistics --- 00:17:54.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.480 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:54.480 00:26:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:54.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:54.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:54.480 00:17:54.480 --- 10.0.0.1 ping statistics --- 00:17:54.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.480 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:54.480 00:26:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.480 00:26:40 -- nvmf/common.sh@421 -- # return 0 00:17:54.480 00:26:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:54.480 00:26:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.480 00:26:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:54.480 00:26:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:54.480 00:26:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.480 00:26:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:54.480 00:26:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:54.480 00:26:40 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:54.480 00:26:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:54.480 00:26:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:54.480 00:26:40 -- common/autotest_common.sh@10 -- # set +x 00:17:54.480 00:26:40 -- nvmf/common.sh@469 -- # nvmfpid=89545 00:17:54.480 00:26:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:54.480 00:26:40 -- nvmf/common.sh@470 -- # waitforlisten 89545 00:17:54.480 00:26:40 -- common/autotest_common.sh@819 -- # '[' -z 89545 ']' 00:17:54.480 00:26:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.480 00:26:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:54.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.480 00:26:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.480 00:26:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:54.480 00:26:40 -- common/autotest_common.sh@10 -- # set +x 00:17:54.480 [2024-07-13 00:26:41.069772] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:54.480 [2024-07-13 00:26:41.069886] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.480 [2024-07-13 00:26:41.213291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.480 [2024-07-13 00:26:41.309445] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:54.480 [2024-07-13 00:26:41.309667] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.480 [2024-07-13 00:26:41.309686] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.480 [2024-07-13 00:26:41.309698] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
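For readers following the nvmf_veth_init output above, the topology it builds can be condensed into the sketch below. The interface names, namespace name, and 10.0.0.0/24 addresses mirror the log; treat this as an illustrative reduction of what nvmf/common.sh does (the "ip link set ... up" steps are omitted for brevity), not the exact script code.

    ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target side moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # host/initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge                                  # bridge joins the host-side veth peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic

The three pings that follow (10.0.0.2, 10.0.0.3 from the host; 10.0.0.1 from inside the namespace) are simply verifying this topology before the target is started.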
00:17:54.480 [2024-07-13 00:26:41.309737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.046 00:26:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:55.046 00:26:42 -- common/autotest_common.sh@852 -- # return 0 00:17:55.046 00:26:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:55.046 00:26:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:55.046 00:26:42 -- common/autotest_common.sh@10 -- # set +x 00:17:55.046 00:26:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.046 00:26:42 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:55.046 00:26:42 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:55.046 00:26:42 -- fips/fips.sh@138 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:55.046 00:26:42 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:55.046 00:26:42 -- fips/fips.sh@140 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:55.046 00:26:42 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:55.046 00:26:42 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:55.046 00:26:42 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:55.303 [2024-07-13 00:26:42.330109] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:55.303 [2024-07-13 00:26:42.346039] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:55.303 [2024-07-13 00:26:42.346289] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.303 malloc0 00:17:55.303 00:26:42 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:55.303 00:26:42 -- fips/fips.sh@148 -- # bdevperf_pid=89604 00:17:55.303 00:26:42 -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:55.303 00:26:42 -- fips/fips.sh@149 -- # waitforlisten 89604 /var/tmp/bdevperf.sock 00:17:55.303 00:26:42 -- common/autotest_common.sh@819 -- # '[' -z 89604 ']' 00:17:55.303 00:26:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.303 00:26:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:55.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:55.303 00:26:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.304 00:26:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:55.304 00:26:42 -- common/autotest_common.sh@10 -- # set +x 00:17:55.304 [2024-07-13 00:26:42.466671] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:55.304 [2024-07-13 00:26:42.466777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89604 ] 00:17:55.561 [2024-07-13 00:26:42.604167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.561 [2024-07-13 00:26:42.703229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.514 00:26:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:56.514 00:26:43 -- common/autotest_common.sh@852 -- # return 0 00:17:56.514 00:26:43 -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:56.514 [2024-07-13 00:26:43.592396] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:56.514 TLSTESTn1 00:17:56.514 00:26:43 -- fips/fips.sh@155 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:56.772 Running I/O for 10 seconds... 00:18:06.742 00:18:06.742 Latency(us) 00:18:06.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.742 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:06.742 Verification LBA range: start 0x0 length 0x2000 00:18:06.742 TLSTESTn1 : 10.02 5793.00 22.63 0.00 0.00 22060.67 5779.08 22282.24 00:18:06.742 =================================================================================================================== 00:18:06.742 Total : 5793.00 22.63 0.00 0.00 22060.67 5779.08 22282.24 00:18:06.742 0 00:18:06.742 00:26:53 -- fips/fips.sh@1 -- # cleanup 00:18:06.742 00:26:53 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:06.742 00:26:53 -- common/autotest_common.sh@796 -- # type=--id 00:18:06.742 00:26:53 -- common/autotest_common.sh@797 -- # id=0 00:18:06.742 00:26:53 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:06.742 00:26:53 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:06.742 00:26:53 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:06.742 00:26:53 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:18:06.742 00:26:53 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:06.742 00:26:53 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:06.742 nvmf_trace.0 00:18:06.742 00:26:53 -- common/autotest_common.sh@811 -- # return 0 00:18:06.742 00:26:53 -- fips/fips.sh@16 -- # killprocess 89604 00:18:06.742 00:26:53 -- common/autotest_common.sh@926 -- # '[' -z 89604 ']' 00:18:06.742 00:26:53 -- common/autotest_common.sh@930 -- # kill -0 89604 00:18:06.742 00:26:53 -- common/autotest_common.sh@931 -- # uname 00:18:06.742 00:26:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:06.742 00:26:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89604 00:18:06.742 00:26:53 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:06.742 00:26:53 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:06.742 killing process with pid 89604 00:18:06.742 00:26:53 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 89604' 00:18:06.742 Received shutdown signal, test time was about 10.000000 seconds 00:18:06.742 00:18:06.742 Latency(us) 00:18:06.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.742 =================================================================================================================== 00:18:06.742 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:06.742 00:26:53 -- common/autotest_common.sh@945 -- # kill 89604 00:18:06.742 00:26:53 -- common/autotest_common.sh@950 -- # wait 89604 00:18:07.000 00:26:54 -- fips/fips.sh@17 -- # nvmftestfini 00:18:07.000 00:26:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:07.000 00:26:54 -- nvmf/common.sh@116 -- # sync 00:18:07.000 00:26:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:07.000 00:26:54 -- nvmf/common.sh@119 -- # set +e 00:18:07.000 00:26:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:07.000 00:26:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:07.000 rmmod nvme_tcp 00:18:07.000 rmmod nvme_fabrics 00:18:07.000 rmmod nvme_keyring 00:18:07.000 00:26:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:07.000 00:26:54 -- nvmf/common.sh@123 -- # set -e 00:18:07.000 00:26:54 -- nvmf/common.sh@124 -- # return 0 00:18:07.000 00:26:54 -- nvmf/common.sh@477 -- # '[' -n 89545 ']' 00:18:07.000 00:26:54 -- nvmf/common.sh@478 -- # killprocess 89545 00:18:07.000 00:26:54 -- common/autotest_common.sh@926 -- # '[' -z 89545 ']' 00:18:07.000 00:26:54 -- common/autotest_common.sh@930 -- # kill -0 89545 00:18:07.000 00:26:54 -- common/autotest_common.sh@931 -- # uname 00:18:07.000 00:26:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:07.000 00:26:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89545 00:18:07.000 00:26:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:07.000 killing process with pid 89545 00:18:07.000 00:26:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:07.000 00:26:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89545' 00:18:07.000 00:26:54 -- common/autotest_common.sh@945 -- # kill 89545 00:18:07.000 00:26:54 -- common/autotest_common.sh@950 -- # wait 89545 00:18:07.568 00:26:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:07.568 00:26:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:07.568 00:26:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:07.568 00:26:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.568 00:26:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:07.568 00:26:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.568 00:26:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.568 00:26:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.568 00:26:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:07.568 00:26:54 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:07.568 00:18:07.568 real 0m14.220s 00:18:07.568 user 0m18.129s 00:18:07.568 sys 0m6.369s 00:18:07.568 00:26:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:07.568 00:26:54 -- common/autotest_common.sh@10 -- # set +x 00:18:07.568 ************************************ 00:18:07.568 END TEST nvmf_fips 00:18:07.568 ************************************ 00:18:07.568 00:26:54 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:07.568 00:26:54 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:07.568 00:26:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:07.568 00:26:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:07.568 00:26:54 -- common/autotest_common.sh@10 -- # set +x 00:18:07.568 ************************************ 00:18:07.568 START TEST nvmf_fuzz 00:18:07.568 ************************************ 00:18:07.568 00:26:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:07.568 * Looking for test storage... 00:18:07.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:07.568 00:26:54 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:07.568 00:26:54 -- nvmf/common.sh@7 -- # uname -s 00:18:07.568 00:26:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.568 00:26:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.568 00:26:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.568 00:26:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.568 00:26:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.568 00:26:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.568 00:26:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.568 00:26:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.568 00:26:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.568 00:26:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.568 00:26:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:18:07.568 00:26:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:18:07.568 00:26:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.568 00:26:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.568 00:26:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:07.568 00:26:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:07.568 00:26:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.568 00:26:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.568 00:26:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.568 00:26:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.568 00:26:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.568 
00:26:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.568 00:26:54 -- paths/export.sh@5 -- # export PATH 00:18:07.568 00:26:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.568 00:26:54 -- nvmf/common.sh@46 -- # : 0 00:18:07.568 00:26:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:07.568 00:26:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:07.568 00:26:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:07.568 00:26:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.568 00:26:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.568 00:26:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:07.568 00:26:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:07.568 00:26:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:07.568 00:26:54 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:07.568 00:26:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:07.568 00:26:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.568 00:26:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:07.568 00:26:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:07.568 00:26:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:07.568 00:26:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.568 00:26:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.568 00:26:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.568 00:26:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:07.568 00:26:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:07.568 00:26:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:07.568 00:26:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:07.568 00:26:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:07.568 00:26:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:07.568 00:26:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.568 00:26:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.568 00:26:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:07.569 00:26:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:07.569 00:26:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:07.569 00:26:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:07.569 00:26:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:07.569 00:26:54 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.569 00:26:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:07.569 00:26:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:07.569 00:26:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:07.569 00:26:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:07.569 00:26:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:07.569 00:26:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:07.569 Cannot find device "nvmf_tgt_br" 00:18:07.569 00:26:54 -- nvmf/common.sh@154 -- # true 00:18:07.569 00:26:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:07.569 Cannot find device "nvmf_tgt_br2" 00:18:07.569 00:26:54 -- nvmf/common.sh@155 -- # true 00:18:07.569 00:26:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:07.827 00:26:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:07.827 Cannot find device "nvmf_tgt_br" 00:18:07.827 00:26:54 -- nvmf/common.sh@157 -- # true 00:18:07.827 00:26:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:07.827 Cannot find device "nvmf_tgt_br2" 00:18:07.827 00:26:54 -- nvmf/common.sh@158 -- # true 00:18:07.827 00:26:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:07.827 00:26:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:07.827 00:26:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:07.827 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.827 00:26:54 -- nvmf/common.sh@161 -- # true 00:18:07.827 00:26:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:07.827 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.827 00:26:54 -- nvmf/common.sh@162 -- # true 00:18:07.827 00:26:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:07.827 00:26:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:07.827 00:26:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:07.827 00:26:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:07.827 00:26:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:07.827 00:26:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:07.827 00:26:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:07.827 00:26:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:07.827 00:26:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:07.827 00:26:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:07.827 00:26:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:07.827 00:26:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:07.827 00:26:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:07.827 00:26:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:07.827 00:26:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:07.827 00:26:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:07.827 00:26:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:18:07.827 00:26:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:07.827 00:26:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:07.827 00:26:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:07.827 00:26:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:07.827 00:26:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:07.827 00:26:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:08.085 00:26:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:08.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:08.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:18:08.085 00:18:08.085 --- 10.0.0.2 ping statistics --- 00:18:08.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.085 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:18:08.085 00:26:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:08.085 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:08.085 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:18:08.085 00:18:08.085 --- 10.0.0.3 ping statistics --- 00:18:08.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.085 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:08.085 00:26:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:08.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:08.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:08.086 00:18:08.086 --- 10.0.0.1 ping statistics --- 00:18:08.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.086 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:08.086 00:26:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:08.086 00:26:55 -- nvmf/common.sh@421 -- # return 0 00:18:08.086 00:26:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:08.086 00:26:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:08.086 00:26:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:08.086 00:26:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:08.086 00:26:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:08.086 00:26:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:08.086 00:26:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:08.086 00:26:55 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=89941 00:18:08.086 00:26:55 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:08.086 00:26:55 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:08.086 00:26:55 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 89941 00:18:08.086 00:26:55 -- common/autotest_common.sh@819 -- # '[' -z 89941 ']' 00:18:08.086 00:26:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.086 00:26:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:08.086 00:26:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
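The nvmfappstart/waitforlisten pattern visible above reduces to roughly the following; the polling loop and the rpc_get_methods probe are an illustrative approximation of what autotest_common.sh does while printing "Waiting for process to start up and listen on UNIX domain socket...", not a verbatim copy.

    # Launch the target inside the test namespace (core mask and flags as logged above)
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # Poll the RPC UNIX socket until the target answers (retry count/interval are illustrative)
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done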
00:18:08.086 00:26:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:08.086 00:26:55 -- common/autotest_common.sh@10 -- # set +x 00:18:09.019 00:26:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:09.019 00:26:56 -- common/autotest_common.sh@852 -- # return 0 00:18:09.019 00:26:56 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:09.019 00:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:09.019 00:26:56 -- common/autotest_common.sh@10 -- # set +x 00:18:09.019 00:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:09.019 00:26:56 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:09.019 00:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:09.019 00:26:56 -- common/autotest_common.sh@10 -- # set +x 00:18:09.019 Malloc0 00:18:09.019 00:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:09.019 00:26:56 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:09.019 00:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:09.019 00:26:56 -- common/autotest_common.sh@10 -- # set +x 00:18:09.019 00:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:09.019 00:26:56 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:09.019 00:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:09.019 00:26:56 -- common/autotest_common.sh@10 -- # set +x 00:18:09.019 00:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:09.019 00:26:56 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.019 00:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:09.019 00:26:56 -- common/autotest_common.sh@10 -- # set +x 00:18:09.019 00:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:09.019 00:26:56 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:09.019 00:26:56 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:09.584 Shutting down the fuzz application 00:18:09.584 00:26:56 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:09.842 Shutting down the fuzz application 00:18:09.842 00:26:56 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.842 00:26:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:09.842 00:26:56 -- common/autotest_common.sh@10 -- # set +x 00:18:09.842 00:26:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:09.842 00:26:56 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:09.842 00:26:56 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:09.842 00:26:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:09.842 00:26:56 -- nvmf/common.sh@116 -- # sync 00:18:09.842 00:26:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:09.842 00:26:57 -- nvmf/common.sh@119 -- # set +e 00:18:09.842 00:26:57 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:18:09.842 00:26:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:09.842 rmmod nvme_tcp 00:18:09.842 rmmod nvme_fabrics 00:18:10.100 rmmod nvme_keyring 00:18:10.100 00:26:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:10.100 00:26:57 -- nvmf/common.sh@123 -- # set -e 00:18:10.100 00:26:57 -- nvmf/common.sh@124 -- # return 0 00:18:10.100 00:26:57 -- nvmf/common.sh@477 -- # '[' -n 89941 ']' 00:18:10.100 00:26:57 -- nvmf/common.sh@478 -- # killprocess 89941 00:18:10.100 00:26:57 -- common/autotest_common.sh@926 -- # '[' -z 89941 ']' 00:18:10.100 00:26:57 -- common/autotest_common.sh@930 -- # kill -0 89941 00:18:10.100 00:26:57 -- common/autotest_common.sh@931 -- # uname 00:18:10.100 00:26:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:10.100 00:26:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89941 00:18:10.100 00:26:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:10.100 00:26:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:10.100 killing process with pid 89941 00:18:10.100 00:26:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89941' 00:18:10.100 00:26:57 -- common/autotest_common.sh@945 -- # kill 89941 00:18:10.100 00:26:57 -- common/autotest_common.sh@950 -- # wait 89941 00:18:10.358 00:26:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:10.358 00:26:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:10.358 00:26:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:10.358 00:26:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:10.358 00:26:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:10.358 00:26:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.358 00:26:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:10.358 00:26:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.358 00:26:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:10.358 00:26:57 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:10.358 00:18:10.358 real 0m2.779s 00:18:10.358 user 0m2.958s 00:18:10.358 sys 0m0.677s 00:18:10.358 00:26:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:10.358 ************************************ 00:18:10.358 00:26:57 -- common/autotest_common.sh@10 -- # set +x 00:18:10.358 END TEST nvmf_fuzz 00:18:10.358 ************************************ 00:18:10.358 00:26:57 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:10.358 00:26:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:10.358 00:26:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:10.358 00:26:57 -- common/autotest_common.sh@10 -- # set +x 00:18:10.358 ************************************ 00:18:10.358 START TEST nvmf_multiconnection 00:18:10.358 ************************************ 00:18:10.358 00:26:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:10.358 * Looking for test storage... 
00:18:10.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:10.358 00:26:57 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:10.358 00:26:57 -- nvmf/common.sh@7 -- # uname -s 00:18:10.358 00:26:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.358 00:26:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.358 00:26:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.358 00:26:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.358 00:26:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.358 00:26:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.358 00:26:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.358 00:26:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.358 00:26:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.358 00:26:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.358 00:26:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:18:10.358 00:26:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:18:10.358 00:26:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.358 00:26:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.358 00:26:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:10.358 00:26:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:10.358 00:26:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.358 00:26:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.358 00:26:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.358 00:26:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.358 00:26:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.358 00:26:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.358 00:26:57 -- 
paths/export.sh@5 -- # export PATH 00:18:10.358 00:26:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.358 00:26:57 -- nvmf/common.sh@46 -- # : 0 00:18:10.359 00:26:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:10.359 00:26:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:10.359 00:26:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:10.359 00:26:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.359 00:26:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.359 00:26:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:10.359 00:26:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:10.359 00:26:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:10.359 00:26:57 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:10.359 00:26:57 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:10.359 00:26:57 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:10.359 00:26:57 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:10.359 00:26:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:10.359 00:26:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.359 00:26:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:10.359 00:26:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:10.359 00:26:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:10.359 00:26:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.359 00:26:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:10.359 00:26:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.359 00:26:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:10.359 00:26:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:10.359 00:26:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:10.359 00:26:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:10.359 00:26:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:10.359 00:26:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:10.359 00:26:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:10.359 00:26:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:10.359 00:26:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:10.359 00:26:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:10.359 00:26:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:10.359 00:26:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:10.359 00:26:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:10.359 00:26:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:10.359 00:26:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:10.359 00:26:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:10.359 00:26:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:10.359 00:26:57 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:10.359 00:26:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:10.619 00:26:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:10.619 Cannot find device "nvmf_tgt_br" 00:18:10.619 00:26:57 -- nvmf/common.sh@154 -- # true 00:18:10.619 00:26:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:10.619 Cannot find device "nvmf_tgt_br2" 00:18:10.619 00:26:57 -- nvmf/common.sh@155 -- # true 00:18:10.619 00:26:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:10.619 00:26:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:10.619 Cannot find device "nvmf_tgt_br" 00:18:10.619 00:26:57 -- nvmf/common.sh@157 -- # true 00:18:10.619 00:26:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:10.619 Cannot find device "nvmf_tgt_br2" 00:18:10.619 00:26:57 -- nvmf/common.sh@158 -- # true 00:18:10.619 00:26:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:10.619 00:26:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:10.619 00:26:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:10.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.619 00:26:57 -- nvmf/common.sh@161 -- # true 00:18:10.619 00:26:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:10.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.619 00:26:57 -- nvmf/common.sh@162 -- # true 00:18:10.619 00:26:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:10.619 00:26:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:10.619 00:26:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:10.619 00:26:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:10.619 00:26:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:10.619 00:26:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:10.620 00:26:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:10.620 00:26:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:10.620 00:26:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:10.620 00:26:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:10.620 00:26:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:10.620 00:26:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:10.620 00:26:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:10.620 00:26:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:10.620 00:26:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:10.620 00:26:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:10.620 00:26:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:10.620 00:26:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:10.620 00:26:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:10.886 00:26:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:10.886 00:26:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:10.886 
00:26:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:10.886 00:26:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:10.886 00:26:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:10.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:10.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:18:10.886 00:18:10.886 --- 10.0.0.2 ping statistics --- 00:18:10.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.886 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:18:10.886 00:26:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:10.886 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:10.886 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:18:10.886 00:18:10.886 --- 10.0.0.3 ping statistics --- 00:18:10.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.886 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:18:10.886 00:26:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:10.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:10.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:18:10.886 00:18:10.886 --- 10.0.0.1 ping statistics --- 00:18:10.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.886 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:10.886 00:26:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.886 00:26:57 -- nvmf/common.sh@421 -- # return 0 00:18:10.886 00:26:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:10.886 00:26:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.886 00:26:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:10.886 00:26:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:10.886 00:26:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.886 00:26:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:10.886 00:26:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:10.886 00:26:57 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:10.886 00:26:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:10.886 00:26:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:10.886 00:26:57 -- common/autotest_common.sh@10 -- # set +x 00:18:10.886 00:26:57 -- nvmf/common.sh@469 -- # nvmfpid=90155 00:18:10.886 00:26:57 -- nvmf/common.sh@470 -- # waitforlisten 90155 00:18:10.886 00:26:57 -- common/autotest_common.sh@819 -- # '[' -z 90155 ']' 00:18:10.886 00:26:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.886 00:26:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:10.886 00:26:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:10.886 00:26:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.886 00:26:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:10.886 00:26:57 -- common/autotest_common.sh@10 -- # set +x 00:18:10.886 [2024-07-13 00:26:58.000715] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:18:10.886 [2024-07-13 00:26:58.000823] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.143 [2024-07-13 00:26:58.141680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:11.143 [2024-07-13 00:26:58.225849] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:11.143 [2024-07-13 00:26:58.226011] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.143 [2024-07-13 00:26:58.226040] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.143 [2024-07-13 00:26:58.226063] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.143 [2024-07-13 00:26:58.226256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.143 [2024-07-13 00:26:58.226564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.143 [2024-07-13 00:26:58.226704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:11.143 [2024-07-13 00:26:58.226706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.076 00:26:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:12.076 00:26:59 -- common/autotest_common.sh@852 -- # return 0 00:18:12.076 00:26:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:12.076 00:26:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 00:26:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.076 00:26:59 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:12.076 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 [2024-07-13 00:26:59.065416] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.076 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.076 00:26:59 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:12.076 00:26:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.076 00:26:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:12.076 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 Malloc1 00:18:12.076 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.076 00:26:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:12.076 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.076 00:26:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:12.076 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.076 00:26:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:12.076 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 [2024-07-13 00:26:59.150322] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.076 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.076 00:26:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.076 00:26:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:12.076 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 Malloc2 00:18:12.076 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.076 00:26:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:12.076 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.076 00:26:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:12.076 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.076 00:26:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:12.076 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.076 00:26:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.076 00:26:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:12.076 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 Malloc3 00:18:12.076 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.076 00:26:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:12.076 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.076 00:26:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:12.076 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.076 00:26:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:12.076 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.076 00:26:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.076 00:26:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:12.076 
00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 Malloc4 00:18:12.076 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.076 00:26:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:12.076 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.076 00:26:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:12.076 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.076 00:26:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:12.076 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.076 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.076 00:26:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.076 00:26:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:12.076 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.076 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 Malloc5 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.335 00:26:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 Malloc6 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.335 00:26:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 Malloc7 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.335 00:26:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 Malloc8 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 
00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.335 00:26:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 Malloc9 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.335 00:26:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.335 00:26:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:12.335 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.335 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.593 Malloc10 00:18:12.593 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.593 00:26:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:12.593 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.593 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.593 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.593 00:26:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:12.593 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.593 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.593 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.593 00:26:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:12.593 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.593 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.593 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.593 00:26:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.593 00:26:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:12.593 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.593 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.593 Malloc11 00:18:12.593 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.593 00:26:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:12.593 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.593 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.593 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.593 00:26:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:12.593 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.593 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.593 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.593 00:26:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:12.593 00:26:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.593 00:26:59 -- common/autotest_common.sh@10 -- # set +x 00:18:12.593 00:26:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.593 00:26:59 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:12.593 00:26:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.593 00:26:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:12.851 00:26:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:12.851 00:26:59 -- common/autotest_common.sh@1177 -- # local i=0 00:18:12.851 00:26:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:12.851 00:26:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:12.851 00:26:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:14.750 00:27:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:14.750 00:27:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:14.750 00:27:01 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:18:14.750 00:27:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:14.750 00:27:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.750 00:27:01 -- common/autotest_common.sh@1187 -- # return 0 00:18:14.750 00:27:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.750 00:27:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:15.007 00:27:02 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:15.007 00:27:02 -- common/autotest_common.sh@1177 -- # local i=0 00:18:15.007 00:27:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:15.007 00:27:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:15.007 00:27:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:16.907 00:27:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:16.907 00:27:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:16.907 00:27:04 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:18:16.907 00:27:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:16.907 00:27:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:16.907 00:27:04 -- common/autotest_common.sh@1187 -- # return 0 00:18:16.907 00:27:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:18:16.907 00:27:04 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:17.165 00:27:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:17.166 00:27:04 -- common/autotest_common.sh@1177 -- # local i=0 00:18:17.166 00:27:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:17.166 00:27:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:17.166 00:27:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:19.066 00:27:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:19.066 00:27:06 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:18:19.066 00:27:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:19.066 00:27:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:19.066 00:27:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:19.066 00:27:06 -- common/autotest_common.sh@1187 -- # return 0 00:18:19.066 00:27:06 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:19.066 00:27:06 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:19.325 00:27:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:19.325 00:27:06 -- common/autotest_common.sh@1177 -- # local i=0 00:18:19.325 00:27:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:19.325 00:27:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:19.325 00:27:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:21.228 00:27:08 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:21.229 00:27:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:21.229 00:27:08 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:18:21.229 00:27:08 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:21.229 00:27:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.229 00:27:08 -- common/autotest_common.sh@1187 -- # return 0 00:18:21.229 00:27:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.229 00:27:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:21.486 00:27:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:21.486 00:27:08 -- common/autotest_common.sh@1177 -- # local i=0 00:18:21.486 00:27:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.486 00:27:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:21.486 00:27:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:23.380 00:27:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:23.380 00:27:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:23.380 00:27:10 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:18:23.637 00:27:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:23.637 00:27:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.637 00:27:10 
-- common/autotest_common.sh@1187 -- # return 0 00:18:23.637 00:27:10 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:23.637 00:27:10 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:23.637 00:27:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:23.637 00:27:10 -- common/autotest_common.sh@1177 -- # local i=0 00:18:23.637 00:27:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:23.637 00:27:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:23.637 00:27:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:26.166 00:27:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:26.166 00:27:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:26.166 00:27:12 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:18:26.166 00:27:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:26.166 00:27:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:26.166 00:27:12 -- common/autotest_common.sh@1187 -- # return 0 00:18:26.166 00:27:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:26.166 00:27:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:26.166 00:27:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:26.166 00:27:12 -- common/autotest_common.sh@1177 -- # local i=0 00:18:26.166 00:27:12 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:26.166 00:27:12 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:26.166 00:27:12 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:28.065 00:27:14 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:28.065 00:27:14 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:28.065 00:27:14 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:18:28.065 00:27:15 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:28.065 00:27:15 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:28.065 00:27:15 -- common/autotest_common.sh@1187 -- # return 0 00:18:28.065 00:27:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:28.065 00:27:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:28.065 00:27:15 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:28.065 00:27:15 -- common/autotest_common.sh@1177 -- # local i=0 00:18:28.065 00:27:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:28.065 00:27:15 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:28.065 00:27:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:29.994 00:27:17 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:29.994 00:27:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:29.994 00:27:17 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:18:29.994 00:27:17 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
00:18:29.994 00:27:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:29.994 00:27:17 -- common/autotest_common.sh@1187 -- # return 0 00:18:29.994 00:27:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:29.994 00:27:17 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:30.252 00:27:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:30.252 00:27:17 -- common/autotest_common.sh@1177 -- # local i=0 00:18:30.252 00:27:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:30.252 00:27:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:30.252 00:27:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:32.779 00:27:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:32.779 00:27:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:32.779 00:27:19 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:18:32.779 00:27:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:32.779 00:27:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:32.779 00:27:19 -- common/autotest_common.sh@1187 -- # return 0 00:18:32.779 00:27:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.779 00:27:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:32.779 00:27:19 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:32.779 00:27:19 -- common/autotest_common.sh@1177 -- # local i=0 00:18:32.779 00:27:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:32.779 00:27:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:32.779 00:27:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:34.679 00:27:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:34.679 00:27:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:34.679 00:27:21 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:18:34.679 00:27:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:34.679 00:27:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:34.679 00:27:21 -- common/autotest_common.sh@1187 -- # return 0 00:18:34.679 00:27:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.679 00:27:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:34.679 00:27:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:34.679 00:27:21 -- common/autotest_common.sh@1177 -- # local i=0 00:18:34.679 00:27:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:34.679 00:27:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:34.679 00:27:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:37.206 00:27:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:37.206 00:27:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:37.206 00:27:23 
-- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:18:37.206 00:27:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:37.206 00:27:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:37.206 00:27:23 -- common/autotest_common.sh@1187 -- # return 0 00:18:37.206 00:27:23 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:37.206 [global] 00:18:37.206 thread=1 00:18:37.206 invalidate=1 00:18:37.206 rw=read 00:18:37.206 time_based=1 00:18:37.206 runtime=10 00:18:37.206 ioengine=libaio 00:18:37.206 direct=1 00:18:37.206 bs=262144 00:18:37.206 iodepth=64 00:18:37.206 norandommap=1 00:18:37.206 numjobs=1 00:18:37.206 00:18:37.206 [job0] 00:18:37.206 filename=/dev/nvme0n1 00:18:37.206 [job1] 00:18:37.206 filename=/dev/nvme10n1 00:18:37.206 [job2] 00:18:37.206 filename=/dev/nvme1n1 00:18:37.206 [job3] 00:18:37.206 filename=/dev/nvme2n1 00:18:37.206 [job4] 00:18:37.206 filename=/dev/nvme3n1 00:18:37.206 [job5] 00:18:37.206 filename=/dev/nvme4n1 00:18:37.206 [job6] 00:18:37.206 filename=/dev/nvme5n1 00:18:37.206 [job7] 00:18:37.206 filename=/dev/nvme6n1 00:18:37.206 [job8] 00:18:37.206 filename=/dev/nvme7n1 00:18:37.206 [job9] 00:18:37.206 filename=/dev/nvme8n1 00:18:37.206 [job10] 00:18:37.206 filename=/dev/nvme9n1 00:18:37.206 Could not set queue depth (nvme0n1) 00:18:37.206 Could not set queue depth (nvme10n1) 00:18:37.206 Could not set queue depth (nvme1n1) 00:18:37.206 Could not set queue depth (nvme2n1) 00:18:37.206 Could not set queue depth (nvme3n1) 00:18:37.206 Could not set queue depth (nvme4n1) 00:18:37.206 Could not set queue depth (nvme5n1) 00:18:37.206 Could not set queue depth (nvme6n1) 00:18:37.206 Could not set queue depth (nvme7n1) 00:18:37.206 Could not set queue depth (nvme8n1) 00:18:37.206 Could not set queue depth (nvme9n1) 00:18:37.206 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.206 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.206 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.206 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.206 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.206 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.206 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.206 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.206 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.206 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.206 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:37.206 fio-3.35 00:18:37.206 Starting 11 threads 00:18:49.409 00:18:49.409 job0: (groupid=0, jobs=1): err= 0: pid=90632: Sat Jul 13 00:27:34 2024 00:18:49.409 read: IOPS=500, BW=125MiB/s (131MB/s)(1261MiB/10078msec) 00:18:49.409 slat (usec): min=16, max=167293, avg=1964.76, stdev=8476.25 
00:18:49.409 clat (msec): min=59, max=321, avg=125.76, stdev=42.57 00:18:49.409 lat (msec): min=74, max=414, avg=127.73, stdev=43.80 00:18:49.409 clat percentiles (msec): 00:18:49.409 | 1.00th=[ 83], 5.00th=[ 91], 10.00th=[ 96], 20.00th=[ 102], 00:18:49.409 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 112], 60.00th=[ 115], 00:18:49.409 | 70.00th=[ 120], 80.00th=[ 126], 90.00th=[ 213], 95.00th=[ 234], 00:18:49.409 | 99.00th=[ 257], 99.50th=[ 259], 99.90th=[ 264], 99.95th=[ 264], 00:18:49.409 | 99.99th=[ 321] 00:18:49.409 bw ( KiB/s): min=64000, max=164352, per=8.22%, avg=127410.45, stdev=35868.24, samples=20 00:18:49.409 iops : min= 250, max= 642, avg=497.50, stdev=140.07, samples=20 00:18:49.409 lat (msec) : 100=17.43%, 250=80.59%, 500=1.98% 00:18:49.409 cpu : usr=0.17%, sys=1.69%, ctx=1220, majf=0, minf=4097 00:18:49.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:49.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.409 issued rwts: total=5043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.409 job1: (groupid=0, jobs=1): err= 0: pid=90633: Sat Jul 13 00:27:34 2024 00:18:49.409 read: IOPS=293, BW=73.3MiB/s (76.8MB/s)(744MiB/10155msec) 00:18:49.409 slat (usec): min=20, max=213581, avg=3340.43, stdev=13926.66 00:18:49.409 clat (msec): min=27, max=404, avg=214.54, stdev=37.82 00:18:49.409 lat (msec): min=29, max=457, avg=217.88, stdev=40.27 00:18:49.409 clat percentiles (msec): 00:18:49.409 | 1.00th=[ 42], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 194], 00:18:49.409 | 30.00th=[ 205], 40.00th=[ 213], 50.00th=[ 220], 60.00th=[ 224], 00:18:49.409 | 70.00th=[ 230], 80.00th=[ 236], 90.00th=[ 249], 95.00th=[ 262], 00:18:49.409 | 99.00th=[ 321], 99.50th=[ 355], 99.90th=[ 380], 99.95th=[ 380], 00:18:49.409 | 99.99th=[ 405] 00:18:49.409 bw ( KiB/s): min=57740, max=100864, per=4.81%, avg=74500.20, stdev=10777.77, samples=20 00:18:49.409 iops : min= 225, max= 394, avg=290.90, stdev=42.13, samples=20 00:18:49.409 lat (msec) : 50=1.65%, 100=0.03%, 250=89.39%, 500=8.94% 00:18:49.409 cpu : usr=0.16%, sys=1.04%, ctx=811, majf=0, minf=4097 00:18:49.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:18:49.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.409 issued rwts: total=2977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.409 job2: (groupid=0, jobs=1): err= 0: pid=90634: Sat Jul 13 00:27:34 2024 00:18:49.409 read: IOPS=1127, BW=282MiB/s (295MB/s)(2828MiB/10038msec) 00:18:49.409 slat (usec): min=18, max=149634, avg=866.97, stdev=4645.00 00:18:49.409 clat (msec): min=8, max=312, avg=55.85, stdev=46.30 00:18:49.409 lat (msec): min=8, max=380, avg=56.72, stdev=47.14 00:18:49.409 clat percentiles (msec): 00:18:49.409 | 1.00th=[ 22], 5.00th=[ 27], 10.00th=[ 30], 20.00th=[ 35], 00:18:49.409 | 30.00th=[ 37], 40.00th=[ 41], 50.00th=[ 44], 60.00th=[ 46], 00:18:49.409 | 70.00th=[ 50], 80.00th=[ 56], 90.00th=[ 87], 95.00th=[ 207], 00:18:49.409 | 99.00th=[ 249], 99.50th=[ 266], 99.90th=[ 296], 99.95th=[ 296], 00:18:49.409 | 99.99th=[ 313] 00:18:49.409 bw ( KiB/s): min=64512, max=409498, per=18.56%, avg=287741.05, stdev=141099.25, samples=20 00:18:49.409 iops : min= 252, max= 1599, 
avg=1123.70, stdev=551.10, samples=20 00:18:49.409 lat (msec) : 10=0.11%, 20=0.56%, 50=71.99%, 100=20.79%, 250=5.55% 00:18:49.409 lat (msec) : 500=1.00% 00:18:49.409 cpu : usr=0.33%, sys=3.75%, ctx=2440, majf=0, minf=4097 00:18:49.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:49.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.409 issued rwts: total=11313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.409 job3: (groupid=0, jobs=1): err= 0: pid=90635: Sat Jul 13 00:27:34 2024 00:18:49.409 read: IOPS=300, BW=75.2MiB/s (78.8MB/s)(764MiB/10155msec) 00:18:49.409 slat (usec): min=21, max=210329, avg=3294.42, stdev=14237.69 00:18:49.409 clat (msec): min=28, max=424, avg=209.05, stdev=30.31 00:18:49.409 lat (msec): min=28, max=448, avg=212.34, stdev=33.30 00:18:49.409 clat percentiles (msec): 00:18:49.409 | 1.00th=[ 128], 5.00th=[ 161], 10.00th=[ 174], 20.00th=[ 188], 00:18:49.409 | 30.00th=[ 199], 40.00th=[ 205], 50.00th=[ 211], 60.00th=[ 218], 00:18:49.409 | 70.00th=[ 222], 80.00th=[ 230], 90.00th=[ 243], 95.00th=[ 253], 00:18:49.409 | 99.00th=[ 296], 99.50th=[ 305], 99.90th=[ 330], 99.95th=[ 426], 00:18:49.409 | 99.99th=[ 426] 00:18:49.409 bw ( KiB/s): min=63361, max=96063, per=4.93%, avg=76431.70, stdev=11448.34, samples=20 00:18:49.409 iops : min= 247, max= 375, avg=298.45, stdev=44.68, samples=20 00:18:49.409 lat (msec) : 50=0.07%, 100=0.69%, 250=93.58%, 500=5.66% 00:18:49.409 cpu : usr=0.08%, sys=1.05%, ctx=774, majf=0, minf=4097 00:18:49.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:18:49.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.409 issued rwts: total=3054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.409 job4: (groupid=0, jobs=1): err= 0: pid=90636: Sat Jul 13 00:27:34 2024 00:18:49.409 read: IOPS=737, BW=184MiB/s (193MB/s)(1857MiB/10070msec) 00:18:49.409 slat (usec): min=15, max=52873, avg=1305.15, stdev=4639.68 00:18:49.409 clat (msec): min=21, max=155, avg=85.33, stdev=14.76 00:18:49.409 lat (msec): min=22, max=158, avg=86.63, stdev=15.22 00:18:49.409 clat percentiles (msec): 00:18:49.409 | 1.00th=[ 54], 5.00th=[ 64], 10.00th=[ 69], 20.00th=[ 74], 00:18:49.409 | 30.00th=[ 79], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 87], 00:18:49.409 | 70.00th=[ 91], 80.00th=[ 95], 90.00th=[ 104], 95.00th=[ 113], 00:18:49.409 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:18:49.409 | 99.99th=[ 157] 00:18:49.409 bw ( KiB/s): min=137453, max=210944, per=12.15%, avg=188247.10, stdev=19612.61, samples=20 00:18:49.409 iops : min= 536, max= 824, avg=735.15, stdev=76.71, samples=20 00:18:49.409 lat (msec) : 50=0.23%, 100=86.87%, 250=12.90% 00:18:49.409 cpu : usr=0.25%, sys=2.31%, ctx=1610, majf=0, minf=4097 00:18:49.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:49.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.409 issued rwts: total=7426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.409 job5: 
(groupid=0, jobs=1): err= 0: pid=90637: Sat Jul 13 00:27:34 2024 00:18:49.409 read: IOPS=652, BW=163MiB/s (171MB/s)(1645MiB/10090msec) 00:18:49.409 slat (usec): min=20, max=53649, avg=1499.52, stdev=4993.37 00:18:49.409 clat (msec): min=24, max=182, avg=96.44, stdev=24.89 00:18:49.409 lat (msec): min=25, max=182, avg=97.94, stdev=25.51 00:18:49.409 clat percentiles (msec): 00:18:49.410 | 1.00th=[ 45], 5.00th=[ 53], 10.00th=[ 59], 20.00th=[ 70], 00:18:49.410 | 30.00th=[ 82], 40.00th=[ 99], 50.00th=[ 104], 60.00th=[ 108], 00:18:49.410 | 70.00th=[ 113], 80.00th=[ 117], 90.00th=[ 125], 95.00th=[ 130], 00:18:49.410 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 182], 99.95th=[ 182], 00:18:49.410 | 99.99th=[ 182] 00:18:49.410 bw ( KiB/s): min=138199, max=267286, per=10.76%, avg=166786.55, stdev=42609.44, samples=20 00:18:49.410 iops : min= 539, max= 1044, avg=651.25, stdev=166.57, samples=20 00:18:49.410 lat (msec) : 50=3.63%, 100=39.81%, 250=56.56% 00:18:49.410 cpu : usr=0.26%, sys=2.12%, ctx=1401, majf=0, minf=4097 00:18:49.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:49.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.410 issued rwts: total=6581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.410 job6: (groupid=0, jobs=1): err= 0: pid=90638: Sat Jul 13 00:27:34 2024 00:18:49.410 read: IOPS=302, BW=75.6MiB/s (79.3MB/s)(768MiB/10150msec) 00:18:49.410 slat (usec): min=22, max=113972, avg=3256.56, stdev=10986.46 00:18:49.410 clat (msec): min=63, max=324, avg=207.95, stdev=31.97 00:18:49.410 lat (msec): min=63, max=368, avg=211.21, stdev=33.85 00:18:49.410 clat percentiles (msec): 00:18:49.410 | 1.00th=[ 81], 5.00th=[ 159], 10.00th=[ 171], 20.00th=[ 188], 00:18:49.410 | 30.00th=[ 201], 40.00th=[ 207], 50.00th=[ 211], 60.00th=[ 215], 00:18:49.410 | 70.00th=[ 220], 80.00th=[ 228], 90.00th=[ 245], 95.00th=[ 257], 00:18:49.410 | 99.00th=[ 292], 99.50th=[ 321], 99.90th=[ 321], 99.95th=[ 326], 00:18:49.410 | 99.99th=[ 326] 00:18:49.410 bw ( KiB/s): min=64000, max=97596, per=4.96%, avg=76917.05, stdev=9535.11, samples=20 00:18:49.410 iops : min= 250, max= 381, avg=300.25, stdev=37.25, samples=20 00:18:49.410 lat (msec) : 100=1.69%, 250=90.75%, 500=7.56% 00:18:49.410 cpu : usr=0.09%, sys=1.23%, ctx=596, majf=0, minf=4097 00:18:49.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:18:49.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.410 issued rwts: total=3070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.410 job7: (groupid=0, jobs=1): err= 0: pid=90639: Sat Jul 13 00:27:34 2024 00:18:49.410 read: IOPS=294, BW=73.7MiB/s (77.2MB/s)(747MiB/10137msec) 00:18:49.410 slat (usec): min=22, max=202240, avg=3345.30, stdev=13058.35 00:18:49.410 clat (msec): min=127, max=364, avg=213.58, stdev=26.66 00:18:49.410 lat (msec): min=147, max=434, avg=216.93, stdev=29.49 00:18:49.410 clat percentiles (msec): 00:18:49.410 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 194], 00:18:49.410 | 30.00th=[ 203], 40.00th=[ 209], 50.00th=[ 215], 60.00th=[ 220], 00:18:49.410 | 70.00th=[ 226], 80.00th=[ 232], 90.00th=[ 245], 95.00th=[ 253], 00:18:49.410 | 99.00th=[ 279], 99.50th=[ 
338], 99.90th=[ 338], 99.95th=[ 363], 00:18:49.410 | 99.99th=[ 363] 00:18:49.410 bw ( KiB/s): min=52119, max=95232, per=4.83%, avg=74816.50, stdev=10426.58, samples=20 00:18:49.410 iops : min= 203, max= 372, avg=292.15, stdev=40.76, samples=20 00:18:49.410 lat (msec) : 250=93.40%, 500=6.60% 00:18:49.410 cpu : usr=0.14%, sys=1.00%, ctx=644, majf=0, minf=4097 00:18:49.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:18:49.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.410 issued rwts: total=2987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.410 job8: (groupid=0, jobs=1): err= 0: pid=90644: Sat Jul 13 00:27:34 2024 00:18:49.410 read: IOPS=652, BW=163MiB/s (171MB/s)(1646MiB/10085msec) 00:18:49.410 slat (usec): min=21, max=55792, avg=1514.36, stdev=5175.11 00:18:49.410 clat (msec): min=10, max=188, avg=96.29, stdev=25.43 00:18:49.410 lat (msec): min=10, max=189, avg=97.80, stdev=26.09 00:18:49.410 clat percentiles (msec): 00:18:49.410 | 1.00th=[ 34], 5.00th=[ 55], 10.00th=[ 62], 20.00th=[ 69], 00:18:49.410 | 30.00th=[ 81], 40.00th=[ 96], 50.00th=[ 105], 60.00th=[ 110], 00:18:49.410 | 70.00th=[ 114], 80.00th=[ 118], 90.00th=[ 124], 95.00th=[ 129], 00:18:49.410 | 99.00th=[ 140], 99.50th=[ 153], 99.90th=[ 165], 99.95th=[ 190], 00:18:49.410 | 99.99th=[ 190] 00:18:49.410 bw ( KiB/s): min=132360, max=257536, per=10.77%, avg=166868.65, stdev=43153.14, samples=20 00:18:49.410 iops : min= 517, max= 1006, avg=651.65, stdev=168.68, samples=20 00:18:49.410 lat (msec) : 20=0.50%, 50=2.84%, 100=40.18%, 250=56.48% 00:18:49.410 cpu : usr=0.24%, sys=2.35%, ctx=1069, majf=0, minf=4097 00:18:49.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:49.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.410 issued rwts: total=6583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.410 job9: (groupid=0, jobs=1): err= 0: pid=90645: Sat Jul 13 00:27:34 2024 00:18:49.410 read: IOPS=789, BW=197MiB/s (207MB/s)(1984MiB/10054msec) 00:18:49.410 slat (usec): min=21, max=66174, avg=1255.55, stdev=4552.46 00:18:49.410 clat (msec): min=12, max=170, avg=79.72, stdev=21.64 00:18:49.410 lat (msec): min=14, max=171, avg=80.97, stdev=22.17 00:18:49.410 clat percentiles (msec): 00:18:49.410 | 1.00th=[ 27], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 66], 00:18:49.410 | 30.00th=[ 75], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 88], 00:18:49.410 | 70.00th=[ 91], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 111], 00:18:49.410 | 99.00th=[ 125], 99.50th=[ 127], 99.90th=[ 134], 99.95th=[ 148], 00:18:49.410 | 99.99th=[ 171] 00:18:49.410 bw ( KiB/s): min=141312, max=382976, per=13.00%, avg=201515.05, stdev=53003.48, samples=20 00:18:49.410 iops : min= 552, max= 1496, avg=787.00, stdev=207.04, samples=20 00:18:49.410 lat (msec) : 20=0.48%, 50=13.54%, 100=73.17%, 250=12.82% 00:18:49.410 cpu : usr=0.26%, sys=2.94%, ctx=1421, majf=0, minf=4097 00:18:49.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:49.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.410 issued rwts: 
total=7934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.410 job10: (groupid=0, jobs=1): err= 0: pid=90646: Sat Jul 13 00:27:34 2024 00:18:49.410 read: IOPS=444, BW=111MiB/s (117MB/s)(1129MiB/10154msec) 00:18:49.410 slat (usec): min=17, max=92061, avg=2157.10, stdev=7859.57 00:18:49.410 clat (msec): min=33, max=369, avg=141.36, stdev=64.01 00:18:49.410 lat (msec): min=33, max=369, avg=143.52, stdev=65.34 00:18:49.410 clat percentiles (msec): 00:18:49.410 | 1.00th=[ 62], 5.00th=[ 70], 10.00th=[ 75], 20.00th=[ 84], 00:18:49.410 | 30.00th=[ 90], 40.00th=[ 99], 50.00th=[ 108], 60.00th=[ 176], 00:18:49.410 | 70.00th=[ 205], 80.00th=[ 215], 90.00th=[ 226], 95.00th=[ 234], 00:18:49.410 | 99.00th=[ 264], 99.50th=[ 284], 99.90th=[ 372], 99.95th=[ 372], 00:18:49.410 | 99.99th=[ 372] 00:18:49.410 bw ( KiB/s): min=67584, max=204288, per=7.34%, avg=113826.65, stdev=50928.44, samples=20 00:18:49.410 iops : min= 264, max= 798, avg=444.45, stdev=198.88, samples=20 00:18:49.410 lat (msec) : 50=0.35%, 100=43.77%, 250=54.35%, 500=1.53% 00:18:49.410 cpu : usr=0.21%, sys=1.63%, ctx=887, majf=0, minf=4097 00:18:49.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:49.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:49.410 issued rwts: total=4515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.410 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:49.410 00:18:49.410 Run status group 0 (all jobs): 00:18:49.410 READ: bw=1514MiB/s (1587MB/s), 73.3MiB/s-282MiB/s (76.8MB/s-295MB/s), io=15.0GiB (16.1GB), run=10038-10155msec 00:18:49.410 00:18:49.410 Disk stats (read/write): 00:18:49.410 nvme0n1: ios=9958/0, merge=0/0, ticks=1238910/0, in_queue=1238910, util=97.28% 00:18:49.410 nvme10n1: ios=5830/0, merge=0/0, ticks=1227383/0, in_queue=1227383, util=97.29% 00:18:49.410 nvme1n1: ios=22529/0, merge=0/0, ticks=1225150/0, in_queue=1225150, util=97.25% 00:18:49.410 nvme2n1: ios=5981/0, merge=0/0, ticks=1232597/0, in_queue=1232597, util=97.79% 00:18:49.410 nvme3n1: ios=14749/0, merge=0/0, ticks=1237235/0, in_queue=1237235, util=97.27% 00:18:49.410 nvme4n1: ios=13062/0, merge=0/0, ticks=1238097/0, in_queue=1238097, util=97.76% 00:18:49.410 nvme5n1: ios=6017/0, merge=0/0, ticks=1238732/0, in_queue=1238732, util=98.42% 00:18:49.410 nvme6n1: ios=5849/0, merge=0/0, ticks=1235455/0, in_queue=1235455, util=98.20% 00:18:49.410 nvme7n1: ios=13051/0, merge=0/0, ticks=1238827/0, in_queue=1238827, util=98.69% 00:18:49.410 nvme8n1: ios=15741/0, merge=0/0, ticks=1231940/0, in_queue=1231940, util=98.42% 00:18:49.410 nvme9n1: ios=8915/0, merge=0/0, ticks=1233298/0, in_queue=1233298, util=98.92% 00:18:49.410 00:27:34 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:49.410 [global] 00:18:49.410 thread=1 00:18:49.410 invalidate=1 00:18:49.410 rw=randwrite 00:18:49.410 time_based=1 00:18:49.410 runtime=10 00:18:49.410 ioengine=libaio 00:18:49.410 direct=1 00:18:49.410 bs=262144 00:18:49.410 iodepth=64 00:18:49.410 norandommap=1 00:18:49.410 numjobs=1 00:18:49.410 00:18:49.410 [job0] 00:18:49.410 filename=/dev/nvme0n1 00:18:49.410 [job1] 00:18:49.410 filename=/dev/nvme10n1 00:18:49.410 [job2] 00:18:49.410 filename=/dev/nvme1n1 00:18:49.410 [job3] 00:18:49.410 filename=/dev/nvme2n1 00:18:49.410 [job4] 00:18:49.410 
filename=/dev/nvme3n1 00:18:49.410 [job5] 00:18:49.410 filename=/dev/nvme4n1 00:18:49.410 [job6] 00:18:49.410 filename=/dev/nvme5n1 00:18:49.410 [job7] 00:18:49.410 filename=/dev/nvme6n1 00:18:49.410 [job8] 00:18:49.410 filename=/dev/nvme7n1 00:18:49.410 [job9] 00:18:49.410 filename=/dev/nvme8n1 00:18:49.410 [job10] 00:18:49.410 filename=/dev/nvme9n1 00:18:49.410 Could not set queue depth (nvme0n1) 00:18:49.410 Could not set queue depth (nvme10n1) 00:18:49.410 Could not set queue depth (nvme1n1) 00:18:49.410 Could not set queue depth (nvme2n1) 00:18:49.410 Could not set queue depth (nvme3n1) 00:18:49.410 Could not set queue depth (nvme4n1) 00:18:49.410 Could not set queue depth (nvme5n1) 00:18:49.410 Could not set queue depth (nvme6n1) 00:18:49.410 Could not set queue depth (nvme7n1) 00:18:49.410 Could not set queue depth (nvme8n1) 00:18:49.410 Could not set queue depth (nvme9n1) 00:18:49.410 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.411 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.411 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.411 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.411 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.411 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.411 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.411 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.411 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.411 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.411 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:49.411 fio-3.35 00:18:49.411 Starting 11 threads 00:18:59.385 00:18:59.385 job0: (groupid=0, jobs=1): err= 0: pid=90838: Sat Jul 13 00:27:45 2024 00:18:59.385 write: IOPS=372, BW=93.1MiB/s (97.7MB/s)(947MiB/10167msec); 0 zone resets 00:18:59.385 slat (usec): min=23, max=39092, avg=2602.44, stdev=4555.87 00:18:59.385 clat (msec): min=7, max=338, avg=169.03, stdev=22.42 00:18:59.385 lat (msec): min=9, max=338, avg=171.64, stdev=22.29 00:18:59.385 clat percentiles (msec): 00:18:59.385 | 1.00th=[ 80], 5.00th=[ 153], 10.00th=[ 153], 20.00th=[ 161], 00:18:59.385 | 30.00th=[ 163], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 174], 00:18:59.385 | 70.00th=[ 182], 80.00th=[ 184], 90.00th=[ 186], 95.00th=[ 188], 00:18:59.385 | 99.00th=[ 236], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 338], 00:18:59.385 | 99.99th=[ 338] 00:18:59.385 bw ( KiB/s): min=88064, max=102400, per=8.43%, avg=95360.85, stdev=5830.19, samples=20 00:18:59.385 iops : min= 344, max= 400, avg=372.45, stdev=22.77, samples=20 00:18:59.385 lat (msec) : 10=0.18%, 50=0.42%, 100=0.69%, 250=97.91%, 500=0.79% 00:18:59.385 cpu : usr=0.97%, sys=1.11%, ctx=4511, majf=0, minf=1 00:18:59.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:18:59.385 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.385 issued rwts: total=0,3788,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.385 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.385 job1: (groupid=0, jobs=1): err= 0: pid=90839: Sat Jul 13 00:27:45 2024 00:18:59.385 write: IOPS=371, BW=92.9MiB/s (97.4MB/s)(944MiB/10162msec); 0 zone resets 00:18:59.385 slat (usec): min=20, max=38086, avg=2625.82, stdev=4572.04 00:18:59.385 clat (msec): min=15, max=335, avg=169.46, stdev=21.71 00:18:59.385 lat (msec): min=15, max=335, avg=172.08, stdev=21.57 00:18:59.385 clat percentiles (msec): 00:18:59.385 | 1.00th=[ 78], 5.00th=[ 153], 10.00th=[ 153], 20.00th=[ 161], 00:18:59.385 | 30.00th=[ 163], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 174], 00:18:59.385 | 70.00th=[ 182], 80.00th=[ 184], 90.00th=[ 186], 95.00th=[ 188], 00:18:59.385 | 99.00th=[ 232], 99.50th=[ 279], 99.90th=[ 326], 99.95th=[ 334], 00:18:59.385 | 99.99th=[ 334] 00:18:59.385 bw ( KiB/s): min=88064, max=102400, per=8.40%, avg=95057.30, stdev=5727.64, samples=20 00:18:59.385 iops : min= 344, max= 400, avg=371.25, stdev=22.36, samples=20 00:18:59.385 lat (msec) : 20=0.08%, 50=0.40%, 100=0.93%, 250=97.80%, 500=0.79% 00:18:59.385 cpu : usr=0.87%, sys=1.26%, ctx=4254, majf=0, minf=1 00:18:59.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:18:59.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.385 issued rwts: total=0,3777,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.385 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.385 job2: (groupid=0, jobs=1): err= 0: pid=90851: Sat Jul 13 00:27:45 2024 00:18:59.385 write: IOPS=429, BW=107MiB/s (113MB/s)(1088MiB/10123msec); 0 zone resets 00:18:59.385 slat (usec): min=26, max=23256, avg=2293.14, stdev=3951.29 00:18:59.385 clat (msec): min=5, max=258, avg=146.52, stdev=20.54 00:18:59.385 lat (msec): min=5, max=258, avg=148.82, stdev=20.47 00:18:59.385 clat percentiles (msec): 00:18:59.385 | 1.00th=[ 117], 5.00th=[ 124], 10.00th=[ 130], 20.00th=[ 132], 00:18:59.385 | 30.00th=[ 133], 40.00th=[ 140], 50.00th=[ 142], 60.00th=[ 142], 00:18:59.385 | 70.00th=[ 163], 80.00th=[ 171], 90.00th=[ 174], 95.00th=[ 176], 00:18:59.385 | 99.00th=[ 180], 99.50th=[ 215], 99.90th=[ 251], 99.95th=[ 251], 00:18:59.385 | 99.99th=[ 259] 00:18:59.385 bw ( KiB/s): min=92160, max=126976, per=9.70%, avg=109763.35, stdev=12043.82, samples=20 00:18:59.385 iops : min= 360, max= 496, avg=428.75, stdev=47.06, samples=20 00:18:59.385 lat (msec) : 10=0.05%, 50=0.28%, 100=0.55%, 250=98.99%, 500=0.14% 00:18:59.385 cpu : usr=1.12%, sys=1.41%, ctx=4881, majf=0, minf=1 00:18:59.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:59.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.386 issued rwts: total=0,4351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.386 job3: (groupid=0, jobs=1): err= 0: pid=90852: Sat Jul 13 00:27:45 2024 00:18:59.386 write: IOPS=368, BW=92.1MiB/s (96.6MB/s)(935MiB/10153msec); 0 zone resets 00:18:59.386 slat (usec): min=19, max=56825, avg=2666.99, stdev=4677.90 00:18:59.386 clat (msec): min=59, max=334, avg=170.95, 
stdev=18.77 00:18:59.386 lat (msec): min=59, max=334, avg=173.61, stdev=18.44 00:18:59.386 clat percentiles (msec): 00:18:59.386 | 1.00th=[ 146], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 161], 00:18:59.386 | 30.00th=[ 163], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 174], 00:18:59.386 | 70.00th=[ 182], 80.00th=[ 184], 90.00th=[ 186], 95.00th=[ 188], 00:18:59.386 | 99.00th=[ 232], 99.50th=[ 275], 99.90th=[ 326], 99.95th=[ 334], 00:18:59.386 | 99.99th=[ 334] 00:18:59.386 bw ( KiB/s): min=75776, max=102400, per=8.32%, avg=94136.70, stdev=7300.62, samples=20 00:18:59.386 iops : min= 296, max= 400, avg=367.70, stdev=28.50, samples=20 00:18:59.386 lat (msec) : 100=0.43%, 250=98.77%, 500=0.80% 00:18:59.386 cpu : usr=0.83%, sys=1.19%, ctx=3998, majf=0, minf=1 00:18:59.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:59.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.386 issued rwts: total=0,3741,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.386 job4: (groupid=0, jobs=1): err= 0: pid=90853: Sat Jul 13 00:27:45 2024 00:18:59.386 write: IOPS=370, BW=92.7MiB/s (97.2MB/s)(941MiB/10159msec); 0 zone resets 00:18:59.386 slat (usec): min=19, max=39148, avg=2652.12, stdev=4579.14 00:18:59.386 clat (msec): min=15, max=334, avg=169.97, stdev=20.01 00:18:59.386 lat (msec): min=15, max=334, avg=172.62, stdev=19.76 00:18:59.386 clat percentiles (msec): 00:18:59.386 | 1.00th=[ 132], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 161], 00:18:59.386 | 30.00th=[ 163], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 174], 00:18:59.386 | 70.00th=[ 182], 80.00th=[ 184], 90.00th=[ 186], 95.00th=[ 188], 00:18:59.386 | 99.00th=[ 232], 99.50th=[ 275], 99.90th=[ 326], 99.95th=[ 334], 00:18:59.386 | 99.99th=[ 334] 00:18:59.386 bw ( KiB/s): min=86701, max=102400, per=8.37%, avg=94749.65, stdev=6006.55, samples=20 00:18:59.386 iops : min= 338, max= 400, avg=370.05, stdev=23.48, samples=20 00:18:59.386 lat (msec) : 20=0.03%, 50=0.32%, 100=0.64%, 250=98.22%, 500=0.80% 00:18:59.386 cpu : usr=0.91%, sys=1.05%, ctx=4823, majf=0, minf=1 00:18:59.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:18:59.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.386 issued rwts: total=0,3765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.386 job5: (groupid=0, jobs=1): err= 0: pid=90854: Sat Jul 13 00:27:45 2024 00:18:59.386 write: IOPS=425, BW=106MiB/s (112MB/s)(1080MiB/10152msec); 0 zone resets 00:18:59.386 slat (usec): min=23, max=13707, avg=2309.02, stdev=4004.74 00:18:59.386 clat (msec): min=18, max=325, avg=148.01, stdev=27.82 00:18:59.386 lat (msec): min=18, max=325, avg=150.32, stdev=27.94 00:18:59.386 clat percentiles (msec): 00:18:59.386 | 1.00th=[ 107], 5.00th=[ 112], 10.00th=[ 116], 20.00th=[ 120], 00:18:59.386 | 30.00th=[ 126], 40.00th=[ 144], 50.00th=[ 155], 60.00th=[ 157], 00:18:59.386 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 178], 95.00th=[ 178], 00:18:59.386 | 99.00th=[ 205], 99.50th=[ 271], 99.90th=[ 313], 99.95th=[ 317], 00:18:59.386 | 99.99th=[ 326] 00:18:59.386 bw ( KiB/s): min=90624, max=139264, per=9.63%, avg=108951.80, stdev=17738.15, samples=20 00:18:59.386 iops : min= 354, max= 544, 
avg=425.50, stdev=69.14, samples=20 00:18:59.386 lat (msec) : 20=0.09%, 50=0.28%, 100=0.56%, 250=98.47%, 500=0.60% 00:18:59.386 cpu : usr=0.82%, sys=1.47%, ctx=5978, majf=0, minf=1 00:18:59.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:18:59.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.386 issued rwts: total=0,4320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.386 job6: (groupid=0, jobs=1): err= 0: pid=90855: Sat Jul 13 00:27:45 2024 00:18:59.386 write: IOPS=423, BW=106MiB/s (111MB/s)(1075MiB/10145msec); 0 zone resets 00:18:59.386 slat (usec): min=19, max=16308, avg=2319.86, stdev=4038.14 00:18:59.386 clat (msec): min=14, max=319, avg=148.58, stdev=28.27 00:18:59.386 lat (msec): min=14, max=319, avg=150.90, stdev=28.40 00:18:59.386 clat percentiles (msec): 00:18:59.386 | 1.00th=[ 103], 5.00th=[ 112], 10.00th=[ 116], 20.00th=[ 120], 00:18:59.386 | 30.00th=[ 126], 40.00th=[ 146], 50.00th=[ 157], 60.00th=[ 159], 00:18:59.386 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 178], 95.00th=[ 182], 00:18:59.386 | 99.00th=[ 199], 99.50th=[ 264], 99.90th=[ 309], 99.95th=[ 309], 00:18:59.386 | 99.99th=[ 321] 00:18:59.386 bw ( KiB/s): min=90112, max=139776, per=9.59%, avg=108468.85, stdev=18045.33, samples=20 00:18:59.386 iops : min= 352, max= 546, avg=423.65, stdev=70.42, samples=20 00:18:59.386 lat (msec) : 20=0.09%, 50=0.37%, 100=0.47%, 250=98.47%, 500=0.60% 00:18:59.386 cpu : usr=1.22%, sys=1.04%, ctx=4069, majf=0, minf=1 00:18:59.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:18:59.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.386 issued rwts: total=0,4301,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.386 job7: (groupid=0, jobs=1): err= 0: pid=90856: Sat Jul 13 00:27:45 2024 00:18:59.386 write: IOPS=428, BW=107MiB/s (112MB/s)(1085MiB/10125msec); 0 zone resets 00:18:59.386 slat (usec): min=22, max=26829, avg=2299.53, stdev=3979.61 00:18:59.386 clat (msec): min=26, max=260, avg=146.89, stdev=20.44 00:18:59.386 lat (msec): min=26, max=260, avg=149.19, stdev=20.36 00:18:59.386 clat percentiles (msec): 00:18:59.386 | 1.00th=[ 122], 5.00th=[ 124], 10.00th=[ 130], 20.00th=[ 132], 00:18:59.386 | 30.00th=[ 133], 40.00th=[ 140], 50.00th=[ 142], 60.00th=[ 142], 00:18:59.386 | 70.00th=[ 163], 80.00th=[ 171], 90.00th=[ 174], 95.00th=[ 176], 00:18:59.386 | 99.00th=[ 192], 99.50th=[ 218], 99.90th=[ 253], 99.95th=[ 253], 00:18:59.386 | 99.99th=[ 262] 00:18:59.386 bw ( KiB/s): min=88064, max=124928, per=9.68%, avg=109516.55, stdev=12377.30, samples=20 00:18:59.386 iops : min= 344, max= 488, avg=427.75, stdev=48.41, samples=20 00:18:59.386 lat (msec) : 50=0.28%, 100=0.37%, 250=99.22%, 500=0.14% 00:18:59.386 cpu : usr=1.06%, sys=1.13%, ctx=5044, majf=0, minf=1 00:18:59.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:18:59.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.386 issued rwts: total=0,4341,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.386 
job8: (groupid=0, jobs=1): err= 0: pid=90857: Sat Jul 13 00:27:45 2024 00:18:59.386 write: IOPS=428, BW=107MiB/s (112MB/s)(1088MiB/10157msec); 0 zone resets 00:18:59.386 slat (usec): min=21, max=18736, avg=2230.46, stdev=3993.74 00:18:59.386 clat (usec): min=1525, max=324374, avg=147023.41, stdev=35059.54 00:18:59.386 lat (usec): min=1585, max=324415, avg=149253.87, stdev=35309.47 00:18:59.386 clat percentiles (msec): 00:18:59.386 | 1.00th=[ 13], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 120], 00:18:59.386 | 30.00th=[ 122], 40.00th=[ 146], 50.00th=[ 157], 60.00th=[ 157], 00:18:59.386 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 178], 95.00th=[ 182], 00:18:59.386 | 99.00th=[ 259], 99.50th=[ 279], 99.90th=[ 313], 99.95th=[ 313], 00:18:59.386 | 99.99th=[ 326] 00:18:59.386 bw ( KiB/s): min=90624, max=139264, per=9.70%, avg=109775.50, stdev=19096.97, samples=20 00:18:59.386 iops : min= 354, max= 544, avg=428.75, stdev=74.54, samples=20 00:18:59.386 lat (msec) : 2=0.02%, 4=0.16%, 10=0.46%, 20=0.55%, 50=0.78% 00:18:59.386 lat (msec) : 100=1.56%, 250=95.31%, 500=1.15% 00:18:59.386 cpu : usr=1.05%, sys=1.29%, ctx=3962, majf=0, minf=1 00:18:59.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:59.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.386 issued rwts: total=0,4352,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.386 job9: (groupid=0, jobs=1): err= 0: pid=90861: Sat Jul 13 00:27:45 2024 00:18:59.386 write: IOPS=380, BW=95.1MiB/s (99.7MB/s)(965MiB/10149msec); 0 zone resets 00:18:59.386 slat (usec): min=20, max=38093, avg=2547.94, stdev=4477.53 00:18:59.386 clat (msec): min=33, max=318, avg=165.69, stdev=19.24 00:18:59.386 lat (msec): min=33, max=318, avg=168.24, stdev=19.10 00:18:59.386 clat percentiles (msec): 00:18:59.386 | 1.00th=[ 82], 5.00th=[ 146], 10.00th=[ 150], 20.00th=[ 157], 00:18:59.386 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:18:59.386 | 70.00th=[ 176], 80.00th=[ 176], 90.00th=[ 178], 95.00th=[ 180], 00:18:59.386 | 99.00th=[ 220], 99.50th=[ 264], 99.90th=[ 309], 99.95th=[ 317], 00:18:59.386 | 99.99th=[ 317] 00:18:59.386 bw ( KiB/s): min=83968, max=118784, per=8.58%, avg=97148.20, stdev=7591.47, samples=20 00:18:59.386 iops : min= 328, max= 464, avg=379.45, stdev=29.65, samples=20 00:18:59.386 lat (msec) : 50=0.10%, 100=1.45%, 250=97.77%, 500=0.67% 00:18:59.386 cpu : usr=0.94%, sys=1.14%, ctx=4522, majf=0, minf=1 00:18:59.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:18:59.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.386 issued rwts: total=0,3859,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.386 job10: (groupid=0, jobs=1): err= 0: pid=90863: Sat Jul 13 00:27:45 2024 00:18:59.386 write: IOPS=429, BW=107MiB/s (113MB/s)(1087MiB/10128msec); 0 zone resets 00:18:59.386 slat (usec): min=25, max=19269, avg=2294.88, stdev=3949.56 00:18:59.386 clat (msec): min=4, max=264, avg=146.64, stdev=20.88 00:18:59.386 lat (msec): min=4, max=264, avg=148.93, stdev=20.82 00:18:59.386 clat percentiles (msec): 00:18:59.386 | 1.00th=[ 116], 5.00th=[ 124], 10.00th=[ 130], 20.00th=[ 132], 00:18:59.386 | 30.00th=[ 133], 40.00th=[ 140], 
50.00th=[ 142], 60.00th=[ 142], 00:18:59.386 | 70.00th=[ 163], 80.00th=[ 171], 90.00th=[ 174], 95.00th=[ 176], 00:18:59.386 | 99.00th=[ 182], 99.50th=[ 222], 99.90th=[ 255], 99.95th=[ 255], 00:18:59.386 | 99.99th=[ 266] 00:18:59.386 bw ( KiB/s): min=91136, max=125440, per=9.69%, avg=109684.30, stdev=12169.18, samples=20 00:18:59.387 iops : min= 356, max= 490, avg=428.45, stdev=47.53, samples=20 00:18:59.387 lat (msec) : 10=0.07%, 50=0.25%, 100=0.55%, 250=98.99%, 500=0.14% 00:18:59.387 cpu : usr=1.07%, sys=1.42%, ctx=5780, majf=0, minf=1 00:18:59.387 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:59.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:59.387 issued rwts: total=0,4348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.387 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:59.387 00:18:59.387 Run status group 0 (all jobs): 00:18:59.387 WRITE: bw=1105MiB/s (1159MB/s), 92.1MiB/s-107MiB/s (96.6MB/s-113MB/s), io=11.0GiB (11.8GB), run=10123-10167msec 00:18:59.387 00:18:59.387 Disk stats (read/write): 00:18:59.387 nvme0n1: ios=49/7412, merge=0/0, ticks=50/1205563, in_queue=1205613, util=97.46% 00:18:59.387 nvme10n1: ios=49/7381, merge=0/0, ticks=60/1204609, in_queue=1204669, util=97.55% 00:18:59.387 nvme1n1: ios=28/8521, merge=0/0, ticks=25/1207005, in_queue=1207030, util=97.67% 00:18:59.387 nvme2n1: ios=0/7310, merge=0/0, ticks=0/1203479, in_queue=1203479, util=97.57% 00:18:59.387 nvme3n1: ios=0/7357, merge=0/0, ticks=0/1203920, in_queue=1203920, util=97.72% 00:18:59.387 nvme4n1: ios=0/8476, merge=0/0, ticks=0/1205581, in_queue=1205581, util=98.15% 00:18:59.387 nvme5n1: ios=0/8430, merge=0/0, ticks=0/1204313, in_queue=1204313, util=98.17% 00:18:59.387 nvme6n1: ios=0/8503, merge=0/0, ticks=0/1207189, in_queue=1207189, util=98.30% 00:18:59.387 nvme7n1: ios=0/8538, merge=0/0, ticks=0/1206722, in_queue=1206722, util=98.65% 00:18:59.387 nvme8n1: ios=0/7545, merge=0/0, ticks=0/1205595, in_queue=1205595, util=98.68% 00:18:59.387 nvme9n1: ios=0/8527, merge=0/0, ticks=0/1208108, in_queue=1208108, util=98.87% 00:18:59.387 00:27:45 -- target/multiconnection.sh@36 -- # sync 00:18:59.387 00:27:45 -- target/multiconnection.sh@37 -- # seq 1 11 00:18:59.387 00:27:45 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.387 00:27:45 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:59.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:59.387 00:27:45 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:59.387 00:27:45 -- common/autotest_common.sh@1198 -- # local i=0 00:18:59.387 00:27:45 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:18:59.387 00:27:45 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:59.387 00:27:45 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:59.387 00:27:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:18:59.387 00:27:45 -- common/autotest_common.sh@1210 -- # return 0 00:18:59.387 00:27:45 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:59.387 00:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:59.387 00:27:45 -- common/autotest_common.sh@10 -- # set +x 00:18:59.387 00:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:59.387 00:27:45 -- target/multiconnection.sh@37 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:18:59.387 00:27:45 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:59.387 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:59.387 00:27:45 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:59.387 00:27:45 -- common/autotest_common.sh@1198 -- # local i=0 00:18:59.387 00:27:45 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:18:59.387 00:27:45 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:59.387 00:27:45 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:59.387 00:27:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:18:59.387 00:27:45 -- common/autotest_common.sh@1210 -- # return 0 00:18:59.387 00:27:45 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:59.387 00:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:59.387 00:27:45 -- common/autotest_common.sh@10 -- # set +x 00:18:59.387 00:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:59.387 00:27:45 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.387 00:27:45 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:59.387 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:59.387 00:27:45 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:59.387 00:27:45 -- common/autotest_common.sh@1198 -- # local i=0 00:18:59.387 00:27:45 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:18:59.387 00:27:45 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:59.387 00:27:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:18:59.387 00:27:45 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:59.387 00:27:45 -- common/autotest_common.sh@1210 -- # return 0 00:18:59.387 00:27:45 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:59.387 00:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:59.387 00:27:45 -- common/autotest_common.sh@10 -- # set +x 00:18:59.387 00:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:59.387 00:27:45 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.387 00:27:45 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:59.387 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:59.387 00:27:45 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:59.387 00:27:45 -- common/autotest_common.sh@1198 -- # local i=0 00:18:59.387 00:27:45 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:18:59.387 00:27:45 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:59.387 00:27:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:18:59.387 00:27:45 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:59.387 00:27:45 -- common/autotest_common.sh@1210 -- # return 0 00:18:59.387 00:27:45 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:59.387 00:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:59.387 00:27:45 -- common/autotest_common.sh@10 -- # set +x 00:18:59.387 00:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:59.387 00:27:45 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.387 00:27:45 -- target/multiconnection.sh@38 -- # nvme disconnect 
-n nqn.2016-06.io.spdk:cnode5 00:18:59.387 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:59.387 00:27:46 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:59.387 00:27:46 -- common/autotest_common.sh@1198 -- # local i=0 00:18:59.387 00:27:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:59.387 00:27:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:18:59.387 00:27:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:59.387 00:27:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:18:59.387 00:27:46 -- common/autotest_common.sh@1210 -- # return 0 00:18:59.387 00:27:46 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:59.387 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:59.387 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:18:59.387 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:59.387 00:27:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.387 00:27:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:59.387 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:59.387 00:27:46 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:59.387 00:27:46 -- common/autotest_common.sh@1198 -- # local i=0 00:18:59.387 00:27:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:59.387 00:27:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:18:59.387 00:27:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:59.387 00:27:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:18:59.387 00:27:46 -- common/autotest_common.sh@1210 -- # return 0 00:18:59.387 00:27:46 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:59.387 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:59.387 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:18:59.387 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:59.387 00:27:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.387 00:27:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:59.387 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:59.387 00:27:46 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:59.387 00:27:46 -- common/autotest_common.sh@1198 -- # local i=0 00:18:59.387 00:27:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:18:59.387 00:27:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:59.387 00:27:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:59.387 00:27:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:18:59.387 00:27:46 -- common/autotest_common.sh@1210 -- # return 0 00:18:59.387 00:27:46 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:59.387 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:59.387 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:18:59.387 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:59.387 00:27:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.387 00:27:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:59.387 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 
controller(s) 00:18:59.387 00:27:46 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:59.387 00:27:46 -- common/autotest_common.sh@1198 -- # local i=0 00:18:59.387 00:27:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:18:59.387 00:27:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:59.387 00:27:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:59.387 00:27:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:18:59.387 00:27:46 -- common/autotest_common.sh@1210 -- # return 0 00:18:59.387 00:27:46 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:59.387 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:59.387 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:18:59.387 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:59.387 00:27:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.387 00:27:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:59.387 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:59.387 00:27:46 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:59.387 00:27:46 -- common/autotest_common.sh@1198 -- # local i=0 00:18:59.387 00:27:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:18:59.387 00:27:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:59.387 00:27:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:59.387 00:27:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:18:59.387 00:27:46 -- common/autotest_common.sh@1210 -- # return 0 00:18:59.387 00:27:46 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:59.387 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:59.387 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:18:59.387 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:59.387 00:27:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.387 00:27:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:59.387 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:59.388 00:27:46 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:59.388 00:27:46 -- common/autotest_common.sh@1198 -- # local i=0 00:18:59.388 00:27:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:59.388 00:27:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:18:59.388 00:27:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:18:59.388 00:27:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:59.388 00:27:46 -- common/autotest_common.sh@1210 -- # return 0 00:18:59.388 00:27:46 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:59.388 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:59.388 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:18:59.388 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:59.388 00:27:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.388 00:27:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:59.388 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:59.388 00:27:46 -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK11 00:18:59.388 00:27:46 -- common/autotest_common.sh@1198 -- # local i=0 00:18:59.646 00:27:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:59.646 00:27:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:18:59.646 00:27:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:59.646 00:27:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:18:59.646 00:27:46 -- common/autotest_common.sh@1210 -- # return 0 00:18:59.646 00:27:46 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:59.646 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:59.646 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:18:59.646 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:59.646 00:27:46 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:59.646 00:27:46 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:59.646 00:27:46 -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:59.646 00:27:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:59.646 00:27:46 -- nvmf/common.sh@116 -- # sync 00:18:59.646 00:27:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:59.646 00:27:46 -- nvmf/common.sh@119 -- # set +e 00:18:59.646 00:27:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:59.646 00:27:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:59.646 rmmod nvme_tcp 00:18:59.646 rmmod nvme_fabrics 00:18:59.646 rmmod nvme_keyring 00:18:59.646 00:27:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:59.646 00:27:46 -- nvmf/common.sh@123 -- # set -e 00:18:59.646 00:27:46 -- nvmf/common.sh@124 -- # return 0 00:18:59.646 00:27:46 -- nvmf/common.sh@477 -- # '[' -n 90155 ']' 00:18:59.646 00:27:46 -- nvmf/common.sh@478 -- # killprocess 90155 00:18:59.646 00:27:46 -- common/autotest_common.sh@926 -- # '[' -z 90155 ']' 00:18:59.646 00:27:46 -- common/autotest_common.sh@930 -- # kill -0 90155 00:18:59.647 00:27:46 -- common/autotest_common.sh@931 -- # uname 00:18:59.647 00:27:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:59.647 00:27:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90155 00:18:59.647 00:27:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:59.647 00:27:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:59.647 killing process with pid 90155 00:18:59.647 00:27:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90155' 00:18:59.647 00:27:46 -- common/autotest_common.sh@945 -- # kill 90155 00:18:59.647 00:27:46 -- common/autotest_common.sh@950 -- # wait 90155 00:19:00.214 00:27:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:00.214 00:27:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:00.214 00:27:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:00.214 00:27:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:00.214 00:27:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:00.214 00:27:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.214 00:27:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.214 00:27:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.214 00:27:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:00.214 ************************************ 00:19:00.214 END TEST nvmf_multiconnection 00:19:00.214 
************************************ 00:19:00.214 00:19:00.214 real 0m49.802s 00:19:00.214 user 2m48.748s 00:19:00.214 sys 0m22.702s 00:19:00.214 00:27:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:00.214 00:27:47 -- common/autotest_common.sh@10 -- # set +x 00:19:00.214 00:27:47 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:00.214 00:27:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:00.214 00:27:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:00.214 00:27:47 -- common/autotest_common.sh@10 -- # set +x 00:19:00.214 ************************************ 00:19:00.214 START TEST nvmf_initiator_timeout 00:19:00.214 ************************************ 00:19:00.214 00:27:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:00.214 * Looking for test storage... 00:19:00.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:00.214 00:27:47 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:00.214 00:27:47 -- nvmf/common.sh@7 -- # uname -s 00:19:00.214 00:27:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.214 00:27:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.214 00:27:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.214 00:27:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.214 00:27:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.214 00:27:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.214 00:27:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.214 00:27:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.214 00:27:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.214 00:27:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.214 00:27:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:19:00.214 00:27:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:19:00.214 00:27:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.214 00:27:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.214 00:27:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:00.214 00:27:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:00.214 00:27:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.214 00:27:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.214 00:27:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.214 00:27:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.214 00:27:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.214 00:27:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.214 00:27:47 -- paths/export.sh@5 -- # export PATH 00:19:00.214 00:27:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.214 00:27:47 -- nvmf/common.sh@46 -- # : 0 00:19:00.214 00:27:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:00.214 00:27:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:00.214 00:27:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:00.214 00:27:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.214 00:27:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.214 00:27:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:00.214 00:27:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:00.214 00:27:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:00.214 00:27:47 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:00.214 00:27:47 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:00.214 00:27:47 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:00.214 00:27:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:00.214 00:27:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.214 00:27:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:00.214 00:27:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:00.214 00:27:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:00.214 00:27:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.214 00:27:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.214 00:27:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.214 00:27:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:00.215 00:27:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:00.215 00:27:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:00.215 00:27:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:00.215 00:27:47 -- nvmf/common.sh@419 -- # [[ tcp == 
tcp ]] 00:19:00.215 00:27:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:00.215 00:27:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.215 00:27:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.215 00:27:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:00.215 00:27:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:00.215 00:27:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:00.215 00:27:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:00.215 00:27:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:00.215 00:27:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.215 00:27:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:00.215 00:27:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:00.215 00:27:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:00.215 00:27:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:00.215 00:27:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:00.473 00:27:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:00.474 Cannot find device "nvmf_tgt_br" 00:19:00.474 00:27:47 -- nvmf/common.sh@154 -- # true 00:19:00.474 00:27:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:00.474 Cannot find device "nvmf_tgt_br2" 00:19:00.474 00:27:47 -- nvmf/common.sh@155 -- # true 00:19:00.474 00:27:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:00.474 00:27:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:00.474 Cannot find device "nvmf_tgt_br" 00:19:00.474 00:27:47 -- nvmf/common.sh@157 -- # true 00:19:00.474 00:27:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:00.474 Cannot find device "nvmf_tgt_br2" 00:19:00.474 00:27:47 -- nvmf/common.sh@158 -- # true 00:19:00.474 00:27:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:00.474 00:27:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:00.474 00:27:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:00.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.474 00:27:47 -- nvmf/common.sh@161 -- # true 00:19:00.474 00:27:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:00.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.474 00:27:47 -- nvmf/common.sh@162 -- # true 00:19:00.474 00:27:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:00.474 00:27:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:00.474 00:27:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:00.474 00:27:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:00.474 00:27:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:00.474 00:27:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:00.474 00:27:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:00.474 00:27:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:00.474 00:27:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
00:19:00.474 00:27:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:00.474 00:27:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:00.474 00:27:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:00.474 00:27:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:00.474 00:27:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:00.474 00:27:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:00.474 00:27:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:00.733 00:27:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:00.733 00:27:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:00.733 00:27:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:00.733 00:27:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:00.733 00:27:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:00.733 00:27:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:00.733 00:27:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:00.733 00:27:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:00.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:19:00.733 00:19:00.733 --- 10.0.0.2 ping statistics --- 00:19:00.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.733 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:19:00.733 00:27:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:00.733 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:00.733 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:19:00.733 00:19:00.733 --- 10.0.0.3 ping statistics --- 00:19:00.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.733 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:00.733 00:27:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:00.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:00.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:19:00.733 00:19:00.733 --- 10.0.0.1 ping statistics --- 00:19:00.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.733 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:19:00.733 00:27:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.733 00:27:47 -- nvmf/common.sh@421 -- # return 0 00:19:00.733 00:27:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:00.733 00:27:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.733 00:27:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:00.733 00:27:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:00.733 00:27:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.733 00:27:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:00.733 00:27:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:00.733 00:27:47 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:00.733 00:27:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:00.733 00:27:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:00.734 00:27:47 -- common/autotest_common.sh@10 -- # set +x 00:19:00.734 00:27:47 -- nvmf/common.sh@469 -- # nvmfpid=91230 00:19:00.734 00:27:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:00.734 00:27:47 -- nvmf/common.sh@470 -- # waitforlisten 91230 00:19:00.734 00:27:47 -- common/autotest_common.sh@819 -- # '[' -z 91230 ']' 00:19:00.734 00:27:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.734 00:27:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:00.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.734 00:27:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.734 00:27:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:00.734 00:27:47 -- common/autotest_common.sh@10 -- # set +x 00:19:00.734 [2024-07-13 00:27:47.863951] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:00.734 [2024-07-13 00:27:47.864051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.992 [2024-07-13 00:27:48.008683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:00.992 [2024-07-13 00:27:48.125938] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:00.992 [2024-07-13 00:27:48.126137] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.992 [2024-07-13 00:27:48.126153] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.992 [2024-07-13 00:27:48.126166] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:00.992 [2024-07-13 00:27:48.126382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.992 [2024-07-13 00:27:48.126556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.992 [2024-07-13 00:27:48.126660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:00.992 [2024-07-13 00:27:48.126665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.927 00:27:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:01.927 00:27:48 -- common/autotest_common.sh@852 -- # return 0 00:19:01.927 00:27:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:01.927 00:27:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:01.927 00:27:48 -- common/autotest_common.sh@10 -- # set +x 00:19:01.927 00:27:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.927 00:27:48 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:01.927 00:27:48 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:01.927 00:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.927 00:27:48 -- common/autotest_common.sh@10 -- # set +x 00:19:01.927 Malloc0 00:19:01.927 00:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.927 00:27:48 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:01.927 00:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.927 00:27:48 -- common/autotest_common.sh@10 -- # set +x 00:19:01.927 Delay0 00:19:01.927 00:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.927 00:27:48 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:01.927 00:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.927 00:27:48 -- common/autotest_common.sh@10 -- # set +x 00:19:01.927 [2024-07-13 00:27:48.899273] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.927 00:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.927 00:27:48 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:01.927 00:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.927 00:27:48 -- common/autotest_common.sh@10 -- # set +x 00:19:01.927 00:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.927 00:27:48 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:01.927 00:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.927 00:27:48 -- common/autotest_common.sh@10 -- # set +x 00:19:01.927 00:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.927 00:27:48 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.927 00:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.927 00:27:48 -- common/autotest_common.sh@10 -- # set +x 00:19:01.927 [2024-07-13 00:27:48.927568] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.927 00:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.927 00:27:48 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:01.927 00:27:49 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:01.927 00:27:49 -- common/autotest_common.sh@1177 -- # local i=0 00:19:01.927 00:27:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:01.927 00:27:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:01.927 00:27:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:04.491 00:27:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:04.491 00:27:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:04.491 00:27:51 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:04.491 00:27:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:04.491 00:27:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.491 00:27:51 -- common/autotest_common.sh@1187 -- # return 0 00:19:04.491 00:27:51 -- target/initiator_timeout.sh@35 -- # fio_pid=91312 00:19:04.491 00:27:51 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:04.491 00:27:51 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:04.491 [global] 00:19:04.491 thread=1 00:19:04.491 invalidate=1 00:19:04.491 rw=write 00:19:04.491 time_based=1 00:19:04.491 runtime=60 00:19:04.491 ioengine=libaio 00:19:04.491 direct=1 00:19:04.491 bs=4096 00:19:04.491 iodepth=1 00:19:04.491 norandommap=0 00:19:04.491 numjobs=1 00:19:04.491 00:19:04.491 verify_dump=1 00:19:04.491 verify_backlog=512 00:19:04.491 verify_state_save=0 00:19:04.491 do_verify=1 00:19:04.491 verify=crc32c-intel 00:19:04.491 [job0] 00:19:04.491 filename=/dev/nvme0n1 00:19:04.491 Could not set queue depth (nvme0n1) 00:19:04.491 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.491 fio-3.35 00:19:04.491 Starting 1 thread 00:19:07.029 00:27:54 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:07.029 00:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:07.029 00:27:54 -- common/autotest_common.sh@10 -- # set +x 00:19:07.029 true 00:19:07.029 00:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:07.029 00:27:54 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:07.029 00:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:07.029 00:27:54 -- common/autotest_common.sh@10 -- # set +x 00:19:07.029 true 00:19:07.029 00:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:07.029 00:27:54 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:07.029 00:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:07.029 00:27:54 -- common/autotest_common.sh@10 -- # set +x 00:19:07.029 true 00:19:07.029 00:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:07.029 00:27:54 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:07.029 00:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:07.029 00:27:54 -- common/autotest_common.sh@10 -- # set +x 00:19:07.029 true 00:19:07.029 00:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:07.029 00:27:54 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:19:10.317 00:27:57 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:10.317 00:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:10.317 00:27:57 -- common/autotest_common.sh@10 -- # set +x 00:19:10.317 true 00:19:10.317 00:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:10.317 00:27:57 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:10.317 00:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:10.317 00:27:57 -- common/autotest_common.sh@10 -- # set +x 00:19:10.317 true 00:19:10.317 00:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:10.317 00:27:57 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:10.317 00:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:10.317 00:27:57 -- common/autotest_common.sh@10 -- # set +x 00:19:10.317 true 00:19:10.317 00:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:10.317 00:27:57 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:10.317 00:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:10.317 00:27:57 -- common/autotest_common.sh@10 -- # set +x 00:19:10.317 true 00:19:10.317 00:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:10.317 00:27:57 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:10.317 00:27:57 -- target/initiator_timeout.sh@54 -- # wait 91312 00:20:06.545 00:20:06.545 job0: (groupid=0, jobs=1): err= 0: pid=91333: Sat Jul 13 00:28:51 2024 00:20:06.545 read: IOPS=708, BW=2833KiB/s (2901kB/s)(166MiB/60000msec) 00:20:06.545 slat (usec): min=12, max=13788, avg=15.40, stdev=90.86 00:20:06.545 clat (usec): min=173, max=40540k, avg=1186.63, stdev=196654.68 00:20:06.545 lat (usec): min=188, max=40540k, avg=1202.03, stdev=196654.72 00:20:06.545 clat percentiles (usec): 00:20:06.545 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 212], 00:20:06.545 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 235], 00:20:06.545 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 277], 00:20:06.545 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 355], 99.95th=[ 465], 00:20:06.545 | 99.99th=[ 742] 00:20:06.545 write: IOPS=715, BW=2863KiB/s (2932kB/s)(168MiB/60000msec); 0 zone resets 00:20:06.545 slat (usec): min=17, max=778, avg=21.86, stdev= 6.85 00:20:06.545 clat (usec): min=132, max=1145, avg=182.42, stdev=24.33 00:20:06.546 lat (usec): min=153, max=1166, avg=204.29, stdev=25.75 00:20:06.546 clat percentiles (usec): 00:20:06.546 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:20:06.546 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:20:06.546 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 223], 00:20:06.546 | 99.00th=[ 249], 99.50th=[ 258], 99.90th=[ 297], 99.95th=[ 351], 00:20:06.546 | 99.99th=[ 791] 00:20:06.546 bw ( KiB/s): min= 4096, max=10472, per=100.00%, avg=8610.41, stdev=1265.26, samples=39 00:20:06.546 iops : min= 1024, max= 2618, avg=2152.59, stdev=316.32, samples=39 00:20:06.546 lat (usec) : 250=89.62%, 500=10.34%, 750=0.02%, 1000=0.01% 00:20:06.546 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:20:06.546 cpu : usr=0.50%, sys=1.92%, ctx=85449, majf=0, minf=2 00:20:06.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:06.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:06.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.546 issued rwts: total=42496,42949,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:06.546 00:20:06.546 Run status group 0 (all jobs): 00:20:06.546 READ: bw=2833KiB/s (2901kB/s), 2833KiB/s-2833KiB/s (2901kB/s-2901kB/s), io=166MiB (174MB), run=60000-60000msec 00:20:06.546 WRITE: bw=2863KiB/s (2932kB/s), 2863KiB/s-2863KiB/s (2932kB/s-2932kB/s), io=168MiB (176MB), run=60000-60000msec 00:20:06.546 00:20:06.546 Disk stats (read/write): 00:20:06.546 nvme0n1: ios=42657/42496, merge=0/0, ticks=10163/8199, in_queue=18362, util=99.69% 00:20:06.546 00:28:51 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:06.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:06.546 00:28:51 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:06.546 00:28:51 -- common/autotest_common.sh@1198 -- # local i=0 00:20:06.546 00:28:51 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:06.546 00:28:51 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:06.546 00:28:51 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:06.546 00:28:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:06.546 nvmf hotplug test: fio successful as expected 00:20:06.546 00:28:51 -- common/autotest_common.sh@1210 -- # return 0 00:20:06.546 00:28:51 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:06.546 00:28:51 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:06.546 00:28:51 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:06.546 00:28:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.546 00:28:51 -- common/autotest_common.sh@10 -- # set +x 00:20:06.546 00:28:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.546 00:28:51 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:06.546 00:28:51 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:06.546 00:28:51 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:06.546 00:28:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:06.546 00:28:51 -- nvmf/common.sh@116 -- # sync 00:20:06.546 00:28:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:06.546 00:28:51 -- nvmf/common.sh@119 -- # set +e 00:20:06.546 00:28:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:06.546 00:28:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:06.546 rmmod nvme_tcp 00:20:06.546 rmmod nvme_fabrics 00:20:06.546 rmmod nvme_keyring 00:20:06.546 00:28:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:06.546 00:28:51 -- nvmf/common.sh@123 -- # set -e 00:20:06.546 00:28:51 -- nvmf/common.sh@124 -- # return 0 00:20:06.546 00:28:51 -- nvmf/common.sh@477 -- # '[' -n 91230 ']' 00:20:06.546 00:28:51 -- nvmf/common.sh@478 -- # killprocess 91230 00:20:06.546 00:28:51 -- common/autotest_common.sh@926 -- # '[' -z 91230 ']' 00:20:06.546 00:28:51 -- common/autotest_common.sh@930 -- # kill -0 91230 00:20:06.546 00:28:51 -- common/autotest_common.sh@931 -- # uname 00:20:06.546 00:28:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:06.546 00:28:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 91230 00:20:06.546 killing process with pid 91230 
00:20:06.546 00:28:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:06.546 00:28:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:06.546 00:28:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 91230' 00:20:06.546 00:28:51 -- common/autotest_common.sh@945 -- # kill 91230 00:20:06.546 00:28:51 -- common/autotest_common.sh@950 -- # wait 91230 00:20:06.546 00:28:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:06.546 00:28:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:06.546 00:28:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:06.546 00:28:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.546 00:28:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:06.546 00:28:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.546 00:28:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.546 00:28:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.546 00:28:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:06.546 00:20:06.546 real 1m4.648s 00:20:06.546 user 4m7.361s 00:20:06.546 sys 0m8.035s 00:20:06.546 00:28:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:06.546 ************************************ 00:20:06.546 END TEST nvmf_initiator_timeout 00:20:06.546 ************************************ 00:20:06.546 00:28:51 -- common/autotest_common.sh@10 -- # set +x 00:20:06.546 00:28:52 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:20:06.546 00:28:52 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:06.546 00:28:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:06.546 00:28:52 -- common/autotest_common.sh@10 -- # set +x 00:20:06.546 00:28:52 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:06.546 00:28:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:06.546 00:28:52 -- common/autotest_common.sh@10 -- # set +x 00:20:06.546 00:28:52 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:06.546 00:28:52 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:06.546 00:28:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:06.546 00:28:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:06.546 00:28:52 -- common/autotest_common.sh@10 -- # set +x 00:20:06.546 ************************************ 00:20:06.546 START TEST nvmf_multicontroller 00:20:06.546 ************************************ 00:20:06.546 00:28:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:06.546 * Looking for test storage... 
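Each suite in this log is driven through the autotest run_test wrapper, which brackets the script with the START TEST / END TEST banners and the real/user/sys timing lines seen above. The invocation for this suite, as traced at nvmf.sh@91, boils down to:

# run_test <name> <script> [args...] -- times the script and prints the banners around it
run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp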
00:20:06.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:06.546 00:28:52 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:06.546 00:28:52 -- nvmf/common.sh@7 -- # uname -s 00:20:06.546 00:28:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.546 00:28:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.546 00:28:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.546 00:28:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.546 00:28:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.546 00:28:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.546 00:28:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.546 00:28:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.546 00:28:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.546 00:28:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.546 00:28:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:20:06.546 00:28:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:20:06.546 00:28:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.546 00:28:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.546 00:28:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:06.546 00:28:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:06.546 00:28:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.546 00:28:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.546 00:28:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.546 00:28:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.546 00:28:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.546 00:28:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.546 00:28:52 -- 
paths/export.sh@5 -- # export PATH 00:20:06.547 00:28:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.547 00:28:52 -- nvmf/common.sh@46 -- # : 0 00:20:06.547 00:28:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:06.547 00:28:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:06.547 00:28:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:06.547 00:28:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.547 00:28:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.547 00:28:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:06.547 00:28:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:06.547 00:28:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:06.547 00:28:52 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:06.547 00:28:52 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:06.547 00:28:52 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:06.547 00:28:52 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:06.547 00:28:52 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.547 00:28:52 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:06.547 00:28:52 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:06.547 00:28:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:06.547 00:28:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.547 00:28:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:06.547 00:28:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:06.547 00:28:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:06.547 00:28:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.547 00:28:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.547 00:28:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.547 00:28:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:06.547 00:28:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:06.547 00:28:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:06.547 00:28:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:06.547 00:28:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:06.547 00:28:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:06.547 00:28:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.547 00:28:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.547 00:28:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:06.547 00:28:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:06.547 00:28:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:06.547 00:28:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:06.547 00:28:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:06.547 00:28:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.547 00:28:52 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:06.547 00:28:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:06.547 00:28:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:06.547 00:28:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:06.547 00:28:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:06.547 00:28:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:06.547 Cannot find device "nvmf_tgt_br" 00:20:06.547 00:28:52 -- nvmf/common.sh@154 -- # true 00:20:06.547 00:28:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:06.547 Cannot find device "nvmf_tgt_br2" 00:20:06.547 00:28:52 -- nvmf/common.sh@155 -- # true 00:20:06.547 00:28:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:06.547 00:28:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:06.547 Cannot find device "nvmf_tgt_br" 00:20:06.547 00:28:52 -- nvmf/common.sh@157 -- # true 00:20:06.547 00:28:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:06.547 Cannot find device "nvmf_tgt_br2" 00:20:06.547 00:28:52 -- nvmf/common.sh@158 -- # true 00:20:06.547 00:28:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:06.547 00:28:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:06.547 00:28:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:06.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.547 00:28:52 -- nvmf/common.sh@161 -- # true 00:20:06.547 00:28:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:06.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.547 00:28:52 -- nvmf/common.sh@162 -- # true 00:20:06.547 00:28:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:06.547 00:28:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:06.547 00:28:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:06.547 00:28:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:06.547 00:28:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:06.547 00:28:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:06.547 00:28:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:06.547 00:28:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:06.547 00:28:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:06.547 00:28:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:06.547 00:28:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:06.547 00:28:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:06.547 00:28:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:06.547 00:28:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:06.547 00:28:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:06.547 00:28:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:06.547 00:28:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:06.547 00:28:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:06.547 00:28:52 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:06.547 00:28:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:06.547 00:28:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:06.547 00:28:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:06.547 00:28:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:06.547 00:28:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:06.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:20:06.547 00:20:06.547 --- 10.0.0.2 ping statistics --- 00:20:06.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.547 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:20:06.547 00:28:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:06.547 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:06.547 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:20:06.547 00:20:06.547 --- 10.0.0.3 ping statistics --- 00:20:06.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.547 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:20:06.547 00:28:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:06.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:06.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:06.547 00:20:06.547 --- 10.0.0.1 ping statistics --- 00:20:06.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.547 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:06.547 00:28:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.547 00:28:52 -- nvmf/common.sh@421 -- # return 0 00:20:06.547 00:28:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:06.547 00:28:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.547 00:28:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:06.547 00:28:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:06.547 00:28:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.547 00:28:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:06.547 00:28:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:06.547 00:28:52 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:06.547 00:28:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:06.547 00:28:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:06.547 00:28:52 -- common/autotest_common.sh@10 -- # set +x 00:20:06.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.547 00:28:52 -- nvmf/common.sh@469 -- # nvmfpid=92165 00:20:06.547 00:28:52 -- nvmf/common.sh@470 -- # waitforlisten 92165 00:20:06.547 00:28:52 -- common/autotest_common.sh@819 -- # '[' -z 92165 ']' 00:20:06.547 00:28:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:06.547 00:28:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.547 00:28:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:06.547 00:28:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
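The nvmf_veth_init sequence above builds the virtual test network the rest of this suite runs on. Stripped of the trace prefixes (and with the link-up, iptables ACCEPT rules and ping checks elided), the topology amounts to:

# host side: nvmf_init_if (10.0.0.1/24); its peer nvmf_init_br is enslaved to bridge nvmf_br
# target side: nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) sit inside the
#              nvmf_tgt_ns_spdk namespace; their peers nvmf_tgt_br/nvmf_tgt_br2 also join nvmf_br
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

The target itself is then started inside that namespace exactly as traced for nvmfappstart -m 0xE, and waitforlisten simply polls until the application's RPC socket answers:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE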
00:20:06.547 00:28:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:06.547 00:28:52 -- common/autotest_common.sh@10 -- # set +x 00:20:06.547 [2024-07-13 00:28:52.584151] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:06.547 [2024-07-13 00:28:52.584212] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.547 [2024-07-13 00:28:52.721954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:06.547 [2024-07-13 00:28:52.819489] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:06.547 [2024-07-13 00:28:52.820001] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.548 [2024-07-13 00:28:52.820161] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.548 [2024-07-13 00:28:52.820394] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:06.548 [2024-07-13 00:28:52.821037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.548 [2024-07-13 00:28:52.821224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.548 [2024-07-13 00:28:52.821210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:06.548 00:28:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:06.548 00:28:53 -- common/autotest_common.sh@852 -- # return 0 00:20:06.548 00:28:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:06.548 00:28:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:06.548 00:28:53 -- common/autotest_common.sh@10 -- # set +x 00:20:06.548 00:28:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.548 00:28:53 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:06.548 00:28:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.548 00:28:53 -- common/autotest_common.sh@10 -- # set +x 00:20:06.548 [2024-07-13 00:28:53.580099] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.548 00:28:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.548 00:28:53 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:06.548 00:28:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.548 00:28:53 -- common/autotest_common.sh@10 -- # set +x 00:20:06.548 Malloc0 00:20:06.548 00:28:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.548 00:28:53 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:06.548 00:28:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.548 00:28:53 -- common/autotest_common.sh@10 -- # set +x 00:20:06.548 00:28:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.548 00:28:53 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:06.548 00:28:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.548 00:28:53 -- common/autotest_common.sh@10 -- # set +x 00:20:06.548 00:28:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.548 00:28:53 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:06.548 00:28:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.548 00:28:53 -- common/autotest_common.sh@10 -- # set +x 00:20:06.548 [2024-07-13 00:28:53.638521] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.548 00:28:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.548 00:28:53 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:06.548 00:28:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.548 00:28:53 -- common/autotest_common.sh@10 -- # set +x 00:20:06.548 [2024-07-13 00:28:53.646422] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:06.548 00:28:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.548 00:28:53 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:06.548 00:28:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.548 00:28:53 -- common/autotest_common.sh@10 -- # set +x 00:20:06.548 Malloc1 00:20:06.548 00:28:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.548 00:28:53 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:06.548 00:28:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.548 00:28:53 -- common/autotest_common.sh@10 -- # set +x 00:20:06.548 00:28:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.548 00:28:53 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:06.548 00:28:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.548 00:28:53 -- common/autotest_common.sh@10 -- # set +x 00:20:06.548 00:28:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.548 00:28:53 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:06.548 00:28:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.548 00:28:53 -- common/autotest_common.sh@10 -- # set +x 00:20:06.548 00:28:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.548 00:28:53 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:06.548 00:28:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.548 00:28:53 -- common/autotest_common.sh@10 -- # set +x 00:20:06.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
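Condensed from the rpc_cmd traces above, the target configuration that bdevperf is about to be pointed at is two malloc-backed subsystems, each listening on both ports of the same address (names, ports and sizes taken directly from the log):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

bdevperf is then launched with its own RPC socket (/var/tmp/bdevperf.sock, -q 128 -o 4096 -w write -t 1 -f), and the attach/detach checks below are issued over that socket.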
00:20:06.548 00:28:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.548 00:28:53 -- host/multicontroller.sh@44 -- # bdevperf_pid=92217 00:20:06.548 00:28:53 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:06.548 00:28:53 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:06.548 00:28:53 -- host/multicontroller.sh@47 -- # waitforlisten 92217 /var/tmp/bdevperf.sock 00:20:06.548 00:28:53 -- common/autotest_common.sh@819 -- # '[' -z 92217 ']' 00:20:06.548 00:28:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.548 00:28:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:06.548 00:28:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.548 00:28:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:06.548 00:28:53 -- common/autotest_common.sh@10 -- # set +x 00:20:07.925 00:28:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:07.925 00:28:54 -- common/autotest_common.sh@852 -- # return 0 00:20:07.925 00:28:54 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:07.925 00:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.925 00:28:54 -- common/autotest_common.sh@10 -- # set +x 00:20:07.925 NVMe0n1 00:20:07.925 00:28:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.925 00:28:54 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:07.925 00:28:54 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:07.925 00:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.925 00:28:54 -- common/autotest_common.sh@10 -- # set +x 00:20:07.925 00:28:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.925 1 00:20:07.925 00:28:54 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:07.925 00:28:54 -- common/autotest_common.sh@640 -- # local es=0 00:20:07.925 00:28:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:07.925 00:28:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:20:07.925 00:28:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:07.925 00:28:54 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:20:07.925 00:28:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:07.925 00:28:54 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:07.925 00:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.925 00:28:54 -- common/autotest_common.sh@10 -- # set +x 00:20:07.925 2024/07/13 00:28:54 error on JSON-RPC call, method: 
bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:07.925 request: 00:20:07.925 { 00:20:07.925 "method": "bdev_nvme_attach_controller", 00:20:07.925 "params": { 00:20:07.925 "name": "NVMe0", 00:20:07.925 "trtype": "tcp", 00:20:07.925 "traddr": "10.0.0.2", 00:20:07.925 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:07.925 "hostaddr": "10.0.0.2", 00:20:07.925 "hostsvcid": "60000", 00:20:07.925 "adrfam": "ipv4", 00:20:07.925 "trsvcid": "4420", 00:20:07.925 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:07.925 } 00:20:07.925 } 00:20:07.925 Got JSON-RPC error response 00:20:07.925 GoRPCClient: error on JSON-RPC call 00:20:07.925 00:28:54 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:07.925 00:28:54 -- common/autotest_common.sh@643 -- # es=1 00:20:07.925 00:28:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:07.925 00:28:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:07.925 00:28:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:07.925 00:28:54 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:07.925 00:28:54 -- common/autotest_common.sh@640 -- # local es=0 00:20:07.925 00:28:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:07.925 00:28:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:20:07.925 00:28:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:07.925 00:28:54 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:20:07.925 00:28:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:07.926 00:28:54 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:07.926 00:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.926 00:28:54 -- common/autotest_common.sh@10 -- # set +x 00:20:07.926 2024/07/13 00:28:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:07.926 request: 00:20:07.926 { 00:20:07.926 "method": "bdev_nvme_attach_controller", 00:20:07.926 "params": { 00:20:07.926 "name": "NVMe0", 00:20:07.926 "trtype": "tcp", 00:20:07.926 "traddr": "10.0.0.2", 00:20:07.926 "hostaddr": "10.0.0.2", 00:20:07.926 "hostsvcid": "60000", 00:20:07.926 "adrfam": "ipv4", 00:20:07.926 "trsvcid": "4420", 00:20:07.926 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:20:07.926 } 00:20:07.926 } 00:20:07.926 Got JSON-RPC error response 00:20:07.926 GoRPCClient: error on JSON-RPC call 00:20:07.926 00:28:54 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:07.926 00:28:54 -- common/autotest_common.sh@643 -- # es=1 
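The two JSON-RPC failures above are the expected outcome: the controller is already attached as NVMe0, so repeating the attach with a different host NQN, or pointing the same name at a different subsystem, is refused with Code=-114. Issued by hand over the bdevperf RPC socket, the rejected calls are essentially:

# both are expected to fail: NVMe0 already exists with the specified network path
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 \
    -i 10.0.0.2 -c 60000

The -x disable and -x failover variants that follow are likewise rejected while NVMe0 holds the 4420 path; the test then adds and removes a path on port 4421 successfully and finally attaches NVMe1 on 4421, so that two controllers are present for the I/O run.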
00:20:07.926 00:28:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:07.926 00:28:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:07.926 00:28:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:07.926 00:28:54 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:07.926 00:28:54 -- common/autotest_common.sh@640 -- # local es=0 00:20:07.926 00:28:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:07.926 00:28:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:20:07.926 00:28:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:07.926 00:28:54 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:20:07.926 00:28:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:07.926 00:28:54 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:07.926 00:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.926 00:28:54 -- common/autotest_common.sh@10 -- # set +x 00:20:07.926 2024/07/13 00:28:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:07.926 request: 00:20:07.926 { 00:20:07.926 "method": "bdev_nvme_attach_controller", 00:20:07.926 "params": { 00:20:07.926 "name": "NVMe0", 00:20:07.926 "trtype": "tcp", 00:20:07.926 "traddr": "10.0.0.2", 00:20:07.926 "hostaddr": "10.0.0.2", 00:20:07.926 "hostsvcid": "60000", 00:20:07.926 "adrfam": "ipv4", 00:20:07.926 "trsvcid": "4420", 00:20:07.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.926 "multipath": "disable" 00:20:07.926 } 00:20:07.926 } 00:20:07.926 Got JSON-RPC error response 00:20:07.926 GoRPCClient: error on JSON-RPC call 00:20:07.926 00:28:54 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:07.926 00:28:54 -- common/autotest_common.sh@643 -- # es=1 00:20:07.926 00:28:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:07.926 00:28:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:07.926 00:28:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:07.926 00:28:54 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:07.926 00:28:54 -- common/autotest_common.sh@640 -- # local es=0 00:20:07.926 00:28:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:07.926 00:28:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:20:07.926 00:28:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:07.926 00:28:54 -- common/autotest_common.sh@632 -- # 
type -t rpc_cmd 00:20:07.926 00:28:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:07.926 00:28:54 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:07.926 00:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.926 00:28:54 -- common/autotest_common.sh@10 -- # set +x 00:20:07.926 2024/07/13 00:28:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:07.926 request: 00:20:07.926 { 00:20:07.926 "method": "bdev_nvme_attach_controller", 00:20:07.926 "params": { 00:20:07.926 "name": "NVMe0", 00:20:07.926 "trtype": "tcp", 00:20:07.926 "traddr": "10.0.0.2", 00:20:07.926 "hostaddr": "10.0.0.2", 00:20:07.926 "hostsvcid": "60000", 00:20:07.926 "adrfam": "ipv4", 00:20:07.926 "trsvcid": "4420", 00:20:07.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.926 "multipath": "failover" 00:20:07.926 } 00:20:07.926 } 00:20:07.926 Got JSON-RPC error response 00:20:07.926 GoRPCClient: error on JSON-RPC call 00:20:07.926 00:28:54 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:07.926 00:28:54 -- common/autotest_common.sh@643 -- # es=1 00:20:07.926 00:28:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:07.926 00:28:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:07.926 00:28:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:07.926 00:28:54 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:07.926 00:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.926 00:28:54 -- common/autotest_common.sh@10 -- # set +x 00:20:07.926 00:20:07.926 00:28:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.926 00:28:54 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:07.926 00:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.926 00:28:54 -- common/autotest_common.sh@10 -- # set +x 00:20:07.926 00:28:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.926 00:28:54 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:07.926 00:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.926 00:28:54 -- common/autotest_common.sh@10 -- # set +x 00:20:07.926 00:20:07.926 00:28:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.926 00:28:55 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:07.926 00:28:55 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:07.926 00:28:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.926 00:28:55 -- common/autotest_common.sh@10 -- # set +x 00:20:07.926 00:28:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.926 00:28:55 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:07.926 
00:28:55 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:09.352 0 00:20:09.352 00:28:56 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:09.352 00:28:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:09.352 00:28:56 -- common/autotest_common.sh@10 -- # set +x 00:20:09.352 00:28:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:09.352 00:28:56 -- host/multicontroller.sh@100 -- # killprocess 92217 00:20:09.352 00:28:56 -- common/autotest_common.sh@926 -- # '[' -z 92217 ']' 00:20:09.352 00:28:56 -- common/autotest_common.sh@930 -- # kill -0 92217 00:20:09.352 00:28:56 -- common/autotest_common.sh@931 -- # uname 00:20:09.352 00:28:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:09.352 00:28:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92217 00:20:09.352 killing process with pid 92217 00:20:09.352 00:28:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:09.352 00:28:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:09.352 00:28:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92217' 00:20:09.352 00:28:56 -- common/autotest_common.sh@945 -- # kill 92217 00:20:09.352 00:28:56 -- common/autotest_common.sh@950 -- # wait 92217 00:20:09.352 00:28:56 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:09.352 00:28:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:09.352 00:28:56 -- common/autotest_common.sh@10 -- # set +x 00:20:09.352 00:28:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:09.352 00:28:56 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:09.352 00:28:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:09.352 00:28:56 -- common/autotest_common.sh@10 -- # set +x 00:20:09.352 00:28:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:09.352 00:28:56 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:09.352 00:28:56 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:09.352 00:28:56 -- common/autotest_common.sh@1597 -- # read -r file 00:20:09.352 00:28:56 -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:09.352 00:28:56 -- common/autotest_common.sh@1596 -- # sort -u 00:20:09.352 00:28:56 -- common/autotest_common.sh@1598 -- # cat 00:20:09.352 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:09.352 [2024-07-13 00:28:53.763039] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:20:09.352 [2024-07-13 00:28:53.763183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92217 ] 00:20:09.352 [2024-07-13 00:28:53.906361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.352 [2024-07-13 00:28:54.009152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.352 [2024-07-13 00:28:55.024737] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 57a4da14-dacc-4677-bc2d-8a5cbc2c5c17 already exists 00:20:09.352 [2024-07-13 00:28:55.024800] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:57a4da14-dacc-4677-bc2d-8a5cbc2c5c17 alias for bdev NVMe1n1 00:20:09.352 [2024-07-13 00:28:55.024819] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:09.352 Running I/O for 1 seconds... 00:20:09.352 00:20:09.352 Latency(us) 00:20:09.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.352 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:09.352 NVMe0n1 : 1.01 22138.13 86.48 0.00 0.00 5767.28 2368.23 11021.96 00:20:09.352 =================================================================================================================== 00:20:09.352 Total : 22138.13 86.48 0.00 0.00 5767.28 2368.23 11021.96 00:20:09.352 Received shutdown signal, test time was about 1.000000 seconds 00:20:09.352 00:20:09.352 Latency(us) 00:20:09.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.352 =================================================================================================================== 00:20:09.352 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.352 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:09.352 00:28:56 -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:09.352 00:28:56 -- common/autotest_common.sh@1597 -- # read -r file 00:20:09.352 00:28:56 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:09.352 00:28:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:09.353 00:28:56 -- nvmf/common.sh@116 -- # sync 00:20:09.353 00:28:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:09.353 00:28:56 -- nvmf/common.sh@119 -- # set +e 00:20:09.353 00:28:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:09.353 00:28:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:09.353 rmmod nvme_tcp 00:20:09.353 rmmod nvme_fabrics 00:20:09.610 rmmod nvme_keyring 00:20:09.611 00:28:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:09.611 00:28:56 -- nvmf/common.sh@123 -- # set -e 00:20:09.611 00:28:56 -- nvmf/common.sh@124 -- # return 0 00:20:09.611 00:28:56 -- nvmf/common.sh@477 -- # '[' -n 92165 ']' 00:20:09.611 00:28:56 -- nvmf/common.sh@478 -- # killprocess 92165 00:20:09.611 00:28:56 -- common/autotest_common.sh@926 -- # '[' -z 92165 ']' 00:20:09.611 00:28:56 -- common/autotest_common.sh@930 -- # kill -0 92165 00:20:09.611 00:28:56 -- common/autotest_common.sh@931 -- # uname 00:20:09.611 00:28:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:09.611 00:28:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92165 00:20:09.611 killing process with pid 92165 00:20:09.611 00:28:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:09.611 00:28:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
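The *ERROR* lines preserved in try.txt above also appear to be expected here: NVMe1 is attached as a separate controller against the same namespace that NVMe0n1 already exposes, so registering a second bdev with the same UUID fails unless the two attachments are joined as multipath. The test only requires that both controllers exist, which is what the earlier grep check over the bdevperf socket verified:

# controller-count check as issued in the trace (expects 2: NVMe0 and NVMe1)
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe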
00:20:09.611 00:28:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92165' 00:20:09.611 00:28:56 -- common/autotest_common.sh@945 -- # kill 92165 00:20:09.611 00:28:56 -- common/autotest_common.sh@950 -- # wait 92165 00:20:09.868 00:28:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:09.868 00:28:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:09.868 00:28:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:09.868 00:28:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:09.868 00:28:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:09.868 00:28:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.868 00:28:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.868 00:28:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.868 00:28:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:09.868 00:20:09.868 real 0m4.996s 00:20:09.868 user 0m15.473s 00:20:09.868 sys 0m1.135s 00:20:09.868 00:28:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:09.868 ************************************ 00:20:09.868 END TEST nvmf_multicontroller 00:20:09.868 ************************************ 00:20:09.868 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:20:10.127 00:28:57 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:10.127 00:28:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:10.127 00:28:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:10.127 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:20:10.127 ************************************ 00:20:10.127 START TEST nvmf_aer 00:20:10.127 ************************************ 00:20:10.127 00:28:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:10.127 * Looking for test storage... 
00:20:10.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:10.127 00:28:57 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:10.127 00:28:57 -- nvmf/common.sh@7 -- # uname -s 00:20:10.127 00:28:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.127 00:28:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.127 00:28:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.127 00:28:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.127 00:28:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.127 00:28:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.127 00:28:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.127 00:28:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.127 00:28:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.127 00:28:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.127 00:28:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:20:10.127 00:28:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:20:10.127 00:28:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.127 00:28:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.127 00:28:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:10.127 00:28:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:10.127 00:28:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.127 00:28:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.127 00:28:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.127 00:28:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.128 00:28:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.128 00:28:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.128 00:28:57 -- paths/export.sh@5 -- # 
export PATH 00:20:10.128 00:28:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.128 00:28:57 -- nvmf/common.sh@46 -- # : 0 00:20:10.128 00:28:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:10.128 00:28:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:10.128 00:28:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:10.128 00:28:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.128 00:28:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.128 00:28:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:10.128 00:28:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:10.128 00:28:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:10.128 00:28:57 -- host/aer.sh@11 -- # nvmftestinit 00:20:10.128 00:28:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:10.128 00:28:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.128 00:28:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:10.128 00:28:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:10.128 00:28:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:10.128 00:28:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.128 00:28:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.128 00:28:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.128 00:28:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:10.128 00:28:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:10.128 00:28:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:10.128 00:28:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:10.128 00:28:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:10.128 00:28:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:10.128 00:28:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:10.128 00:28:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:10.128 00:28:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:10.128 00:28:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:10.128 00:28:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:10.128 00:28:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:10.128 00:28:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:10.128 00:28:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:10.128 00:28:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:10.128 00:28:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:10.128 00:28:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:10.128 00:28:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:10.128 00:28:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:10.128 00:28:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:10.128 Cannot find device "nvmf_tgt_br" 00:20:10.128 00:28:57 -- nvmf/common.sh@154 -- # true 
00:20:10.128 00:28:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:10.128 Cannot find device "nvmf_tgt_br2" 00:20:10.128 00:28:57 -- nvmf/common.sh@155 -- # true 00:20:10.128 00:28:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:10.128 00:28:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:10.128 Cannot find device "nvmf_tgt_br" 00:20:10.128 00:28:57 -- nvmf/common.sh@157 -- # true 00:20:10.128 00:28:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:10.128 Cannot find device "nvmf_tgt_br2" 00:20:10.128 00:28:57 -- nvmf/common.sh@158 -- # true 00:20:10.128 00:28:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:10.128 00:28:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:10.128 00:28:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:10.128 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:10.128 00:28:57 -- nvmf/common.sh@161 -- # true 00:20:10.128 00:28:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:10.128 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:10.128 00:28:57 -- nvmf/common.sh@162 -- # true 00:20:10.128 00:28:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:10.387 00:28:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:10.387 00:28:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:10.387 00:28:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:10.387 00:28:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:10.387 00:28:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:10.387 00:28:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:10.387 00:28:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:10.387 00:28:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:10.387 00:28:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:10.387 00:28:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:10.387 00:28:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:10.387 00:28:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:10.387 00:28:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:10.387 00:28:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:10.387 00:28:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:10.387 00:28:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:10.387 00:28:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:10.387 00:28:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:10.387 00:28:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:10.387 00:28:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:10.387 00:28:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:10.387 00:28:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:10.387 00:28:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:10.387 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:20:10.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:20:10.387 00:20:10.387 --- 10.0.0.2 ping statistics --- 00:20:10.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.387 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:10.387 00:28:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:10.387 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:10.387 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:20:10.387 00:20:10.387 --- 10.0.0.3 ping statistics --- 00:20:10.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.387 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:10.387 00:28:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:10.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:10.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:10.387 00:20:10.387 --- 10.0.0.1 ping statistics --- 00:20:10.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.387 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:10.387 00:28:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:10.387 00:28:57 -- nvmf/common.sh@421 -- # return 0 00:20:10.387 00:28:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:10.387 00:28:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:10.387 00:28:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:10.387 00:28:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:10.387 00:28:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:10.387 00:28:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:10.387 00:28:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:10.387 00:28:57 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:10.387 00:28:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:10.387 00:28:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:10.387 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:20:10.387 00:28:57 -- nvmf/common.sh@469 -- # nvmfpid=92471 00:20:10.387 00:28:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:10.387 00:28:57 -- nvmf/common.sh@470 -- # waitforlisten 92471 00:20:10.387 00:28:57 -- common/autotest_common.sh@819 -- # '[' -z 92471 ']' 00:20:10.387 00:28:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.387 00:28:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:10.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.387 00:28:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.387 00:28:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:10.387 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:20:10.646 [2024-07-13 00:28:57.643004] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:20:10.646 [2024-07-13 00:28:57.643117] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.646 [2024-07-13 00:28:57.785803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:10.905 [2024-07-13 00:28:57.880774] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:10.905 [2024-07-13 00:28:57.880983] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.905 [2024-07-13 00:28:57.881001] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.905 [2024-07-13 00:28:57.881013] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:10.905 [2024-07-13 00:28:57.881215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.905 [2024-07-13 00:28:57.881356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.905 [2024-07-13 00:28:57.881513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.905 [2024-07-13 00:28:57.881513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:11.473 00:28:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:11.473 00:28:58 -- common/autotest_common.sh@852 -- # return 0 00:20:11.473 00:28:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:11.473 00:28:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:11.473 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:20:11.473 00:28:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.473 00:28:58 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:11.473 00:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.473 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:20:11.473 [2024-07-13 00:28:58.699309] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.732 00:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.732 00:28:58 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:11.732 00:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.732 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:20:11.732 Malloc0 00:20:11.732 00:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.732 00:28:58 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:11.732 00:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.732 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:20:11.732 00:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.732 00:28:58 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:11.732 00:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.732 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:20:11.732 00:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.732 00:28:58 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.732 00:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.732 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:20:11.732 [2024-07-13 
00:28:58.773545] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.732 00:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.732 00:28:58 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:11.732 00:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.732 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:20:11.732 [2024-07-13 00:28:58.781201] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:11.732 [ 00:20:11.732 { 00:20:11.732 "allow_any_host": true, 00:20:11.732 "hosts": [], 00:20:11.732 "listen_addresses": [], 00:20:11.732 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:11.732 "subtype": "Discovery" 00:20:11.732 }, 00:20:11.732 { 00:20:11.732 "allow_any_host": true, 00:20:11.732 "hosts": [], 00:20:11.732 "listen_addresses": [ 00:20:11.732 { 00:20:11.732 "adrfam": "IPv4", 00:20:11.732 "traddr": "10.0.0.2", 00:20:11.732 "transport": "TCP", 00:20:11.732 "trsvcid": "4420", 00:20:11.732 "trtype": "TCP" 00:20:11.732 } 00:20:11.732 ], 00:20:11.732 "max_cntlid": 65519, 00:20:11.732 "max_namespaces": 2, 00:20:11.732 "min_cntlid": 1, 00:20:11.732 "model_number": "SPDK bdev Controller", 00:20:11.732 "namespaces": [ 00:20:11.732 { 00:20:11.732 "bdev_name": "Malloc0", 00:20:11.732 "name": "Malloc0", 00:20:11.732 "nguid": "B4F55E4AC080460F84CF837E3C245978", 00:20:11.732 "nsid": 1, 00:20:11.732 "uuid": "b4f55e4a-c080-460f-84cf-837e3c245978" 00:20:11.732 } 00:20:11.732 ], 00:20:11.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.732 "serial_number": "SPDK00000000000001", 00:20:11.732 "subtype": "NVMe" 00:20:11.732 } 00:20:11.732 ] 00:20:11.732 00:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.732 00:28:58 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:11.732 00:28:58 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:11.732 00:28:58 -- host/aer.sh@33 -- # aerpid=92525 00:20:11.732 00:28:58 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:11.732 00:28:58 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:11.732 00:28:58 -- common/autotest_common.sh@1244 -- # local i=0 00:20:11.732 00:28:58 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.732 00:28:58 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:20:11.732 00:28:58 -- common/autotest_common.sh@1247 -- # i=1 00:20:11.732 00:28:58 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:20:11.732 00:28:58 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.732 00:28:58 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:20:11.732 00:28:58 -- common/autotest_common.sh@1247 -- # i=2 00:20:11.732 00:28:58 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:20:11.991 00:28:59 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:11.991 00:28:59 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:11.991 00:28:59 -- common/autotest_common.sh@1255 -- # return 0 00:20:11.991 00:28:59 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:11.991 00:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.991 00:28:59 -- common/autotest_common.sh@10 -- # set +x 00:20:11.991 Malloc1 00:20:11.991 00:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.991 00:28:59 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:11.991 00:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.991 00:28:59 -- common/autotest_common.sh@10 -- # set +x 00:20:11.991 00:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.991 00:28:59 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:11.991 00:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.991 00:28:59 -- common/autotest_common.sh@10 -- # set +x 00:20:11.991 Asynchronous Event Request test 00:20:11.991 Attaching to 10.0.0.2 00:20:11.991 Attached to 10.0.0.2 00:20:11.991 Registering asynchronous event callbacks... 00:20:11.991 Starting namespace attribute notice tests for all controllers... 00:20:11.991 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:11.991 aer_cb - Changed Namespace 00:20:11.991 Cleaning up... 00:20:11.991 [ 00:20:11.991 { 00:20:11.991 "allow_any_host": true, 00:20:11.991 "hosts": [], 00:20:11.991 "listen_addresses": [], 00:20:11.991 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:11.991 "subtype": "Discovery" 00:20:11.991 }, 00:20:11.991 { 00:20:11.991 "allow_any_host": true, 00:20:11.991 "hosts": [], 00:20:11.991 "listen_addresses": [ 00:20:11.991 { 00:20:11.991 "adrfam": "IPv4", 00:20:11.991 "traddr": "10.0.0.2", 00:20:11.991 "transport": "TCP", 00:20:11.991 "trsvcid": "4420", 00:20:11.991 "trtype": "TCP" 00:20:11.991 } 00:20:11.991 ], 00:20:11.991 "max_cntlid": 65519, 00:20:11.991 "max_namespaces": 2, 00:20:11.991 "min_cntlid": 1, 00:20:11.991 "model_number": "SPDK bdev Controller", 00:20:11.991 "namespaces": [ 00:20:11.991 { 00:20:11.991 "bdev_name": "Malloc0", 00:20:11.991 "name": "Malloc0", 00:20:11.991 "nguid": "B4F55E4AC080460F84CF837E3C245978", 00:20:11.991 "nsid": 1, 00:20:11.991 "uuid": "b4f55e4a-c080-460f-84cf-837e3c245978" 00:20:11.991 }, 00:20:11.991 { 00:20:11.991 "bdev_name": "Malloc1", 00:20:11.991 "name": "Malloc1", 00:20:11.991 "nguid": "08473C9CF1DD49C3AC6CE039CEC7BA07", 00:20:11.991 "nsid": 2, 00:20:11.991 "uuid": "08473c9c-f1dd-49c3-ac6c-e039cec7ba07" 00:20:11.991 } 00:20:11.991 ], 00:20:11.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.991 "serial_number": "SPDK00000000000001", 00:20:11.991 "subtype": "NVMe" 00:20:11.991 } 00:20:11.991 ] 00:20:11.991 00:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.991 00:28:59 -- host/aer.sh@43 -- # wait 92525 00:20:11.991 00:28:59 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:11.991 00:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.991 00:28:59 -- common/autotest_common.sh@10 -- # set +x 00:20:11.991 00:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.991 00:28:59 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:11.991 00:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.992 00:28:59 -- common/autotest_common.sh@10 -- # set +x 00:20:11.992 00:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.992 00:28:59 -- host/aer.sh@47 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.992 00:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:11.992 00:28:59 -- common/autotest_common.sh@10 -- # set +x 00:20:11.992 00:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:11.992 00:28:59 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:11.992 00:28:59 -- host/aer.sh@51 -- # nvmftestfini 00:20:11.992 00:28:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:11.992 00:28:59 -- nvmf/common.sh@116 -- # sync 00:20:12.251 00:28:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:12.251 00:28:59 -- nvmf/common.sh@119 -- # set +e 00:20:12.251 00:28:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:12.251 00:28:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:12.251 rmmod nvme_tcp 00:20:12.251 rmmod nvme_fabrics 00:20:12.251 rmmod nvme_keyring 00:20:12.251 00:28:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:12.251 00:28:59 -- nvmf/common.sh@123 -- # set -e 00:20:12.251 00:28:59 -- nvmf/common.sh@124 -- # return 0 00:20:12.251 00:28:59 -- nvmf/common.sh@477 -- # '[' -n 92471 ']' 00:20:12.251 00:28:59 -- nvmf/common.sh@478 -- # killprocess 92471 00:20:12.251 00:28:59 -- common/autotest_common.sh@926 -- # '[' -z 92471 ']' 00:20:12.251 00:28:59 -- common/autotest_common.sh@930 -- # kill -0 92471 00:20:12.251 00:28:59 -- common/autotest_common.sh@931 -- # uname 00:20:12.251 00:28:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:12.251 00:28:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92471 00:20:12.251 00:28:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:12.251 00:28:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:12.251 killing process with pid 92471 00:20:12.251 00:28:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92471' 00:20:12.251 00:28:59 -- common/autotest_common.sh@945 -- # kill 92471 00:20:12.251 [2024-07-13 00:28:59.348866] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:12.251 00:28:59 -- common/autotest_common.sh@950 -- # wait 92471 00:20:12.511 00:28:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:12.511 00:28:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:12.511 00:28:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:12.511 00:28:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:12.511 00:28:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:12.511 00:28:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.511 00:28:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.511 00:28:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.511 00:28:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:12.511 00:20:12.511 real 0m2.556s 00:20:12.511 user 0m7.195s 00:20:12.511 sys 0m0.664s 00:20:12.511 00:28:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.511 00:28:59 -- common/autotest_common.sh@10 -- # set +x 00:20:12.511 ************************************ 00:20:12.511 END TEST nvmf_aer 00:20:12.511 ************************************ 00:20:12.511 00:28:59 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:12.511 00:28:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:12.511 
00:28:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:12.511 00:28:59 -- common/autotest_common.sh@10 -- # set +x 00:20:12.511 ************************************ 00:20:12.511 START TEST nvmf_async_init 00:20:12.511 ************************************ 00:20:12.511 00:28:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:12.770 * Looking for test storage... 00:20:12.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:12.770 00:28:59 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:12.770 00:28:59 -- nvmf/common.sh@7 -- # uname -s 00:20:12.770 00:28:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.770 00:28:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.770 00:28:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.770 00:28:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.770 00:28:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.770 00:28:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.770 00:28:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.770 00:28:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.770 00:28:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.770 00:28:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.771 00:28:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:20:12.771 00:28:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:20:12.771 00:28:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.771 00:28:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.771 00:28:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:12.771 00:28:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:12.771 00:28:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.771 00:28:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.771 00:28:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.771 00:28:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.771 00:28:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.771 00:28:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.771 00:28:59 -- paths/export.sh@5 -- # export PATH 00:20:12.771 00:28:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.771 00:28:59 -- nvmf/common.sh@46 -- # : 0 00:20:12.771 00:28:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:12.771 00:28:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:12.771 00:28:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:12.771 00:28:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.771 00:28:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.771 00:28:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:12.771 00:28:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:12.771 00:28:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:12.771 00:28:59 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:12.771 00:28:59 -- host/async_init.sh@14 -- # null_block_size=512 00:20:12.771 00:28:59 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:12.771 00:28:59 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:12.771 00:28:59 -- host/async_init.sh@20 -- # uuidgen 00:20:12.771 00:28:59 -- host/async_init.sh@20 -- # tr -d - 00:20:12.771 00:28:59 -- host/async_init.sh@20 -- # nguid=e2944dff722743cc857c4c53671614b8 00:20:12.771 00:28:59 -- host/async_init.sh@22 -- # nvmftestinit 00:20:12.771 00:28:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:12.771 00:28:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.771 00:28:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:12.771 00:28:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:12.771 00:28:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:12.771 00:28:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.771 00:28:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.771 00:28:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.771 00:28:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:12.771 00:28:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:12.771 00:28:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:12.771 00:28:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:12.771 00:28:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:12.771 00:28:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:12.771 00:28:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.771 00:28:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.771 00:28:59 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:12.771 00:28:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:12.771 00:28:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:12.771 00:28:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:12.771 00:28:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:12.771 00:28:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.771 00:28:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:12.771 00:28:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:12.771 00:28:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:12.771 00:28:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:12.771 00:28:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:12.771 00:28:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:12.771 Cannot find device "nvmf_tgt_br" 00:20:12.771 00:28:59 -- nvmf/common.sh@154 -- # true 00:20:12.771 00:28:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:12.771 Cannot find device "nvmf_tgt_br2" 00:20:12.771 00:28:59 -- nvmf/common.sh@155 -- # true 00:20:12.771 00:28:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:12.771 00:28:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:12.771 Cannot find device "nvmf_tgt_br" 00:20:12.771 00:28:59 -- nvmf/common.sh@157 -- # true 00:20:12.771 00:28:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:12.771 Cannot find device "nvmf_tgt_br2" 00:20:12.771 00:28:59 -- nvmf/common.sh@158 -- # true 00:20:12.771 00:28:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:12.771 00:28:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:12.771 00:28:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:12.771 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.771 00:28:59 -- nvmf/common.sh@161 -- # true 00:20:12.771 00:28:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:12.771 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.771 00:28:59 -- nvmf/common.sh@162 -- # true 00:20:12.771 00:28:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:13.030 00:29:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:13.030 00:29:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:13.030 00:29:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:13.030 00:29:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:13.030 00:29:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:13.030 00:29:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:13.030 00:29:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:13.030 00:29:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:13.030 00:29:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:13.030 00:29:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:13.030 00:29:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:13.030 00:29:00 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:13.030 00:29:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:13.030 00:29:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:13.030 00:29:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:13.030 00:29:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:13.030 00:29:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:13.030 00:29:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:13.031 00:29:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:13.031 00:29:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:13.031 00:29:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:13.031 00:29:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:13.031 00:29:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:13.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:20:13.031 00:20:13.031 --- 10.0.0.2 ping statistics --- 00:20:13.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.031 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:20:13.031 00:29:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:13.031 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:13.031 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:20:13.031 00:20:13.031 --- 10.0.0.3 ping statistics --- 00:20:13.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.031 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:13.031 00:29:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:13.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:13.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:20:13.031 00:20:13.031 --- 10.0.0.1 ping statistics --- 00:20:13.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.031 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:13.031 00:29:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.031 00:29:00 -- nvmf/common.sh@421 -- # return 0 00:20:13.031 00:29:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:13.031 00:29:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.031 00:29:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:13.031 00:29:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:13.031 00:29:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.031 00:29:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:13.031 00:29:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:13.031 00:29:00 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:13.031 00:29:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:13.031 00:29:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:13.031 00:29:00 -- common/autotest_common.sh@10 -- # set +x 00:20:13.031 00:29:00 -- nvmf/common.sh@469 -- # nvmfpid=92701 00:20:13.031 00:29:00 -- nvmf/common.sh@470 -- # waitforlisten 92701 00:20:13.031 00:29:00 -- common/autotest_common.sh@819 -- # '[' -z 92701 ']' 00:20:13.031 00:29:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.031 00:29:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:13.031 00:29:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:13.031 00:29:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.031 00:29:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:13.031 00:29:00 -- common/autotest_common.sh@10 -- # set +x 00:20:13.290 [2024-07-13 00:29:00.301217] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:13.290 [2024-07-13 00:29:00.301886] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.290 [2024-07-13 00:29:00.437914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.550 [2024-07-13 00:29:00.565089] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:13.550 [2024-07-13 00:29:00.565231] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.550 [2024-07-13 00:29:00.565244] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.550 [2024-07-13 00:29:00.565252] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:13.550 [2024-07-13 00:29:00.565286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.118 00:29:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:14.118 00:29:01 -- common/autotest_common.sh@852 -- # return 0 00:20:14.118 00:29:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:14.118 00:29:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:14.118 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.118 00:29:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.118 00:29:01 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:14.118 00:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.118 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.118 [2024-07-13 00:29:01.314877] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.118 00:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.118 00:29:01 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:14.118 00:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.118 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.118 null0 00:20:14.118 00:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.118 00:29:01 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:14.118 00:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.118 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.118 00:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.118 00:29:01 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:14.118 00:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.118 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.378 00:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.378 00:29:01 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e2944dff722743cc857c4c53671614b8 00:20:14.378 00:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.378 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.378 00:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.378 00:29:01 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:14.378 00:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.378 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.378 [2024-07-13 00:29:01.363014] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.378 00:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.378 00:29:01 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:14.378 00:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.378 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.378 nvme0n1 00:20:14.378 00:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.378 00:29:01 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:14.378 00:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.378 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.637 [ 00:20:14.637 { 00:20:14.637 "aliases": [ 00:20:14.637 "e2944dff-7227-43cc-857c-4c53671614b8" 
00:20:14.637 ], 00:20:14.637 "assigned_rate_limits": { 00:20:14.637 "r_mbytes_per_sec": 0, 00:20:14.637 "rw_ios_per_sec": 0, 00:20:14.637 "rw_mbytes_per_sec": 0, 00:20:14.637 "w_mbytes_per_sec": 0 00:20:14.637 }, 00:20:14.637 "block_size": 512, 00:20:14.637 "claimed": false, 00:20:14.637 "driver_specific": { 00:20:14.637 "mp_policy": "active_passive", 00:20:14.637 "nvme": [ 00:20:14.637 { 00:20:14.637 "ctrlr_data": { 00:20:14.637 "ana_reporting": false, 00:20:14.637 "cntlid": 1, 00:20:14.637 "firmware_revision": "24.01.1", 00:20:14.637 "model_number": "SPDK bdev Controller", 00:20:14.637 "multi_ctrlr": true, 00:20:14.637 "oacs": { 00:20:14.637 "firmware": 0, 00:20:14.637 "format": 0, 00:20:14.637 "ns_manage": 0, 00:20:14.637 "security": 0 00:20:14.637 }, 00:20:14.637 "serial_number": "00000000000000000000", 00:20:14.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:14.637 "vendor_id": "0x8086" 00:20:14.637 }, 00:20:14.637 "ns_data": { 00:20:14.637 "can_share": true, 00:20:14.637 "id": 1 00:20:14.637 }, 00:20:14.637 "trid": { 00:20:14.637 "adrfam": "IPv4", 00:20:14.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:14.637 "traddr": "10.0.0.2", 00:20:14.637 "trsvcid": "4420", 00:20:14.637 "trtype": "TCP" 00:20:14.637 }, 00:20:14.637 "vs": { 00:20:14.637 "nvme_version": "1.3" 00:20:14.637 } 00:20:14.637 } 00:20:14.637 ] 00:20:14.637 }, 00:20:14.637 "name": "nvme0n1", 00:20:14.637 "num_blocks": 2097152, 00:20:14.637 "product_name": "NVMe disk", 00:20:14.637 "supported_io_types": { 00:20:14.637 "abort": true, 00:20:14.637 "compare": true, 00:20:14.637 "compare_and_write": true, 00:20:14.637 "flush": true, 00:20:14.637 "nvme_admin": true, 00:20:14.637 "nvme_io": true, 00:20:14.637 "read": true, 00:20:14.637 "reset": true, 00:20:14.637 "unmap": false, 00:20:14.637 "write": true, 00:20:14.637 "write_zeroes": true 00:20:14.637 }, 00:20:14.637 "uuid": "e2944dff-7227-43cc-857c-4c53671614b8", 00:20:14.637 "zoned": false 00:20:14.637 } 00:20:14.637 ] 00:20:14.637 00:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.637 00:29:01 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:14.637 00:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.637 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.637 [2024-07-13 00:29:01.625068] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:14.637 [2024-07-13 00:29:01.625183] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c9ff0 (9): Bad file descriptor 00:20:14.637 [2024-07-13 00:29:01.756812] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:14.637 00:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.637 00:29:01 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:14.637 00:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.637 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.637 [ 00:20:14.637 { 00:20:14.637 "aliases": [ 00:20:14.637 "e2944dff-7227-43cc-857c-4c53671614b8" 00:20:14.637 ], 00:20:14.638 "assigned_rate_limits": { 00:20:14.638 "r_mbytes_per_sec": 0, 00:20:14.638 "rw_ios_per_sec": 0, 00:20:14.638 "rw_mbytes_per_sec": 0, 00:20:14.638 "w_mbytes_per_sec": 0 00:20:14.638 }, 00:20:14.638 "block_size": 512, 00:20:14.638 "claimed": false, 00:20:14.638 "driver_specific": { 00:20:14.638 "mp_policy": "active_passive", 00:20:14.638 "nvme": [ 00:20:14.638 { 00:20:14.638 "ctrlr_data": { 00:20:14.638 "ana_reporting": false, 00:20:14.638 "cntlid": 2, 00:20:14.638 "firmware_revision": "24.01.1", 00:20:14.638 "model_number": "SPDK bdev Controller", 00:20:14.638 "multi_ctrlr": true, 00:20:14.638 "oacs": { 00:20:14.638 "firmware": 0, 00:20:14.638 "format": 0, 00:20:14.638 "ns_manage": 0, 00:20:14.638 "security": 0 00:20:14.638 }, 00:20:14.638 "serial_number": "00000000000000000000", 00:20:14.638 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:14.638 "vendor_id": "0x8086" 00:20:14.638 }, 00:20:14.638 "ns_data": { 00:20:14.638 "can_share": true, 00:20:14.638 "id": 1 00:20:14.638 }, 00:20:14.638 "trid": { 00:20:14.638 "adrfam": "IPv4", 00:20:14.638 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:14.638 "traddr": "10.0.0.2", 00:20:14.638 "trsvcid": "4420", 00:20:14.638 "trtype": "TCP" 00:20:14.638 }, 00:20:14.638 "vs": { 00:20:14.638 "nvme_version": "1.3" 00:20:14.638 } 00:20:14.638 } 00:20:14.638 ] 00:20:14.638 }, 00:20:14.638 "name": "nvme0n1", 00:20:14.638 "num_blocks": 2097152, 00:20:14.638 "product_name": "NVMe disk", 00:20:14.638 "supported_io_types": { 00:20:14.638 "abort": true, 00:20:14.638 "compare": true, 00:20:14.638 "compare_and_write": true, 00:20:14.638 "flush": true, 00:20:14.638 "nvme_admin": true, 00:20:14.638 "nvme_io": true, 00:20:14.638 "read": true, 00:20:14.638 "reset": true, 00:20:14.638 "unmap": false, 00:20:14.638 "write": true, 00:20:14.638 "write_zeroes": true 00:20:14.638 }, 00:20:14.638 "uuid": "e2944dff-7227-43cc-857c-4c53671614b8", 00:20:14.638 "zoned": false 00:20:14.638 } 00:20:14.638 ] 00:20:14.638 00:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.638 00:29:01 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.638 00:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.638 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.638 00:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.638 00:29:01 -- host/async_init.sh@53 -- # mktemp 00:20:14.638 00:29:01 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.LlY9a2cMNN 00:20:14.638 00:29:01 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:14.638 00:29:01 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.LlY9a2cMNN 00:20:14.638 00:29:01 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:14.638 00:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.638 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.638 00:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.638 00:29:01 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:14.638 00:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.638 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.638 [2024-07-13 00:29:01.825317] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.638 [2024-07-13 00:29:01.825499] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:14.638 00:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.638 00:29:01 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LlY9a2cMNN 00:20:14.638 00:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.638 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.638 00:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.638 00:29:01 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LlY9a2cMNN 00:20:14.638 00:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.638 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.638 [2024-07-13 00:29:01.841312] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.897 nvme0n1 00:20:14.897 00:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.897 00:29:01 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:14.897 00:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.897 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.897 [ 00:20:14.897 { 00:20:14.897 "aliases": [ 00:20:14.897 "e2944dff-7227-43cc-857c-4c53671614b8" 00:20:14.897 ], 00:20:14.897 "assigned_rate_limits": { 00:20:14.897 "r_mbytes_per_sec": 0, 00:20:14.897 "rw_ios_per_sec": 0, 00:20:14.897 "rw_mbytes_per_sec": 0, 00:20:14.897 "w_mbytes_per_sec": 0 00:20:14.897 }, 00:20:14.897 "block_size": 512, 00:20:14.897 "claimed": false, 00:20:14.897 "driver_specific": { 00:20:14.897 "mp_policy": "active_passive", 00:20:14.897 "nvme": [ 00:20:14.897 { 00:20:14.897 "ctrlr_data": { 00:20:14.897 "ana_reporting": false, 00:20:14.897 "cntlid": 3, 00:20:14.897 "firmware_revision": "24.01.1", 00:20:14.897 "model_number": "SPDK bdev Controller", 00:20:14.897 "multi_ctrlr": true, 00:20:14.897 "oacs": { 00:20:14.897 "firmware": 0, 00:20:14.897 "format": 0, 00:20:14.897 "ns_manage": 0, 00:20:14.897 "security": 0 00:20:14.897 }, 00:20:14.897 "serial_number": "00000000000000000000", 00:20:14.897 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:14.897 "vendor_id": "0x8086" 00:20:14.897 }, 00:20:14.897 "ns_data": { 00:20:14.897 "can_share": true, 00:20:14.897 "id": 1 00:20:14.897 }, 00:20:14.897 "trid": { 00:20:14.897 "adrfam": "IPv4", 00:20:14.897 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:14.897 "traddr": "10.0.0.2", 00:20:14.897 "trsvcid": "4421", 00:20:14.897 "trtype": "TCP" 00:20:14.897 }, 00:20:14.897 "vs": { 00:20:14.897 "nvme_version": "1.3" 00:20:14.897 } 00:20:14.897 } 00:20:14.897 ] 00:20:14.897 }, 00:20:14.897 "name": "nvme0n1", 00:20:14.897 "num_blocks": 2097152, 00:20:14.897 "product_name": "NVMe disk", 00:20:14.897 "supported_io_types": { 00:20:14.897 "abort": true, 00:20:14.897 "compare": true, 00:20:14.897 "compare_and_write": true, 00:20:14.897 "flush": true, 00:20:14.897 "nvme_admin": true, 00:20:14.897 "nvme_io": true, 00:20:14.897 
"read": true, 00:20:14.897 "reset": true, 00:20:14.897 "unmap": false, 00:20:14.897 "write": true, 00:20:14.897 "write_zeroes": true 00:20:14.897 }, 00:20:14.897 "uuid": "e2944dff-7227-43cc-857c-4c53671614b8", 00:20:14.897 "zoned": false 00:20:14.897 } 00:20:14.897 ] 00:20:14.897 00:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.897 00:29:01 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.897 00:29:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.897 00:29:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.897 00:29:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.897 00:29:01 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.LlY9a2cMNN 00:20:14.897 00:29:01 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:14.897 00:29:01 -- host/async_init.sh@78 -- # nvmftestfini 00:20:14.897 00:29:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:14.897 00:29:01 -- nvmf/common.sh@116 -- # sync 00:20:14.897 00:29:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:14.897 00:29:02 -- nvmf/common.sh@119 -- # set +e 00:20:14.897 00:29:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:14.897 00:29:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:14.897 rmmod nvme_tcp 00:20:14.897 rmmod nvme_fabrics 00:20:14.897 rmmod nvme_keyring 00:20:14.897 00:29:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:14.897 00:29:02 -- nvmf/common.sh@123 -- # set -e 00:20:14.897 00:29:02 -- nvmf/common.sh@124 -- # return 0 00:20:14.897 00:29:02 -- nvmf/common.sh@477 -- # '[' -n 92701 ']' 00:20:14.897 00:29:02 -- nvmf/common.sh@478 -- # killprocess 92701 00:20:14.897 00:29:02 -- common/autotest_common.sh@926 -- # '[' -z 92701 ']' 00:20:14.897 00:29:02 -- common/autotest_common.sh@930 -- # kill -0 92701 00:20:14.897 00:29:02 -- common/autotest_common.sh@931 -- # uname 00:20:14.897 00:29:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:14.897 00:29:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92701 00:20:14.897 00:29:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:14.897 00:29:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:14.897 killing process with pid 92701 00:20:14.897 00:29:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92701' 00:20:14.897 00:29:02 -- common/autotest_common.sh@945 -- # kill 92701 00:20:14.897 00:29:02 -- common/autotest_common.sh@950 -- # wait 92701 00:20:15.157 00:29:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:15.157 00:29:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:15.157 00:29:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:15.157 00:29:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.157 00:29:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:15.157 00:29:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.157 00:29:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.157 00:29:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.415 00:29:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:15.415 00:20:15.415 real 0m2.672s 00:20:15.415 user 0m2.390s 00:20:15.415 sys 0m0.681s 00:20:15.415 00:29:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:15.415 ************************************ 00:20:15.415 00:29:02 -- common/autotest_common.sh@10 -- # set +x 00:20:15.415 END TEST nvmf_async_init 00:20:15.415 
************************************ 00:20:15.415 00:29:02 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:15.415 00:29:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:15.415 00:29:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:15.415 00:29:02 -- common/autotest_common.sh@10 -- # set +x 00:20:15.415 ************************************ 00:20:15.415 START TEST dma 00:20:15.415 ************************************ 00:20:15.415 00:29:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:15.415 * Looking for test storage... 00:20:15.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:15.415 00:29:02 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:15.415 00:29:02 -- nvmf/common.sh@7 -- # uname -s 00:20:15.415 00:29:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.415 00:29:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.415 00:29:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.415 00:29:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.415 00:29:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.415 00:29:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.415 00:29:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.415 00:29:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.415 00:29:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.415 00:29:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.415 00:29:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:20:15.415 00:29:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:20:15.415 00:29:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.415 00:29:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.415 00:29:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:15.415 00:29:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:15.415 00:29:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.415 00:29:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.415 00:29:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.415 00:29:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.415 00:29:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.415 00:29:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.415 00:29:02 -- paths/export.sh@5 -- # export PATH 00:20:15.415 00:29:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.415 00:29:02 -- nvmf/common.sh@46 -- # : 0 00:20:15.415 00:29:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:15.415 00:29:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:15.415 00:29:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:15.415 00:29:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.415 00:29:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.415 00:29:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:15.415 00:29:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:15.415 00:29:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:15.415 00:29:02 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:15.415 00:29:02 -- host/dma.sh@13 -- # exit 0 00:20:15.415 00:20:15.415 real 0m0.104s 00:20:15.415 user 0m0.043s 00:20:15.415 sys 0m0.067s 00:20:15.415 00:29:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:15.415 ************************************ 00:20:15.415 END TEST dma 00:20:15.415 ************************************ 00:20:15.415 00:29:02 -- common/autotest_common.sh@10 -- # set +x 00:20:15.415 00:29:02 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:15.415 00:29:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:15.415 00:29:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:15.415 00:29:02 -- common/autotest_common.sh@10 -- # set +x 00:20:15.415 ************************************ 00:20:15.415 START TEST nvmf_identify 00:20:15.415 ************************************ 00:20:15.415 00:29:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:15.674 * Looking for test storage... 
00:20:15.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:15.674 00:29:02 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:15.674 00:29:02 -- nvmf/common.sh@7 -- # uname -s 00:20:15.674 00:29:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.674 00:29:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.674 00:29:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.674 00:29:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.674 00:29:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.674 00:29:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.674 00:29:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.674 00:29:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.674 00:29:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.674 00:29:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.674 00:29:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:20:15.674 00:29:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:20:15.674 00:29:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.674 00:29:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.674 00:29:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:15.674 00:29:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:15.674 00:29:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.674 00:29:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.674 00:29:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.674 00:29:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.674 00:29:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.674 00:29:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.674 00:29:02 -- paths/export.sh@5 
-- # export PATH 00:20:15.674 00:29:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.674 00:29:02 -- nvmf/common.sh@46 -- # : 0 00:20:15.674 00:29:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:15.674 00:29:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:15.674 00:29:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:15.674 00:29:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.674 00:29:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.674 00:29:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:15.674 00:29:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:15.674 00:29:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:15.674 00:29:02 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:15.674 00:29:02 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:15.674 00:29:02 -- host/identify.sh@14 -- # nvmftestinit 00:20:15.674 00:29:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:15.674 00:29:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.674 00:29:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:15.674 00:29:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:15.674 00:29:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:15.674 00:29:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.674 00:29:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.674 00:29:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.674 00:29:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:15.674 00:29:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:15.674 00:29:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:15.674 00:29:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:15.674 00:29:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:15.674 00:29:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:15.674 00:29:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.674 00:29:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.674 00:29:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:15.674 00:29:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:15.674 00:29:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:15.674 00:29:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:15.674 00:29:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:15.674 00:29:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.674 00:29:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:15.674 00:29:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:15.674 00:29:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:15.674 00:29:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:15.674 00:29:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:15.674 00:29:02 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:15.674 Cannot find device "nvmf_tgt_br" 00:20:15.674 00:29:02 -- nvmf/common.sh@154 -- # true 00:20:15.674 00:29:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:15.674 Cannot find device "nvmf_tgt_br2" 00:20:15.674 00:29:02 -- nvmf/common.sh@155 -- # true 00:20:15.674 00:29:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:15.674 00:29:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:15.674 Cannot find device "nvmf_tgt_br" 00:20:15.674 00:29:02 -- nvmf/common.sh@157 -- # true 00:20:15.674 00:29:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:15.674 Cannot find device "nvmf_tgt_br2" 00:20:15.674 00:29:02 -- nvmf/common.sh@158 -- # true 00:20:15.674 00:29:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:15.674 00:29:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:15.674 00:29:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:15.674 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.674 00:29:02 -- nvmf/common.sh@161 -- # true 00:20:15.674 00:29:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:15.674 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.674 00:29:02 -- nvmf/common.sh@162 -- # true 00:20:15.674 00:29:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:15.674 00:29:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:15.674 00:29:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:15.674 00:29:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:15.932 00:29:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:15.932 00:29:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:15.932 00:29:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:15.932 00:29:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:15.932 00:29:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:15.932 00:29:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:15.932 00:29:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:15.932 00:29:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:15.932 00:29:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:15.932 00:29:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:15.932 00:29:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:15.932 00:29:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:15.932 00:29:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:15.932 00:29:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:15.932 00:29:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:15.932 00:29:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:15.932 00:29:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:15.932 00:29:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:15.932 00:29:03 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:15.932 00:29:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:15.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:15.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:20:15.932 00:20:15.932 --- 10.0.0.2 ping statistics --- 00:20:15.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.932 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:20:15.932 00:29:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:15.932 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:15.932 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:20:15.932 00:20:15.932 --- 10.0.0.3 ping statistics --- 00:20:15.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.932 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:15.932 00:29:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:15.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:15.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:20:15.932 00:20:15.933 --- 10.0.0.1 ping statistics --- 00:20:15.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.933 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:15.933 00:29:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:15.933 00:29:03 -- nvmf/common.sh@421 -- # return 0 00:20:15.933 00:29:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:15.933 00:29:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:15.933 00:29:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:15.933 00:29:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:15.933 00:29:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:15.933 00:29:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:15.933 00:29:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:15.933 00:29:03 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:15.933 00:29:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:15.933 00:29:03 -- common/autotest_common.sh@10 -- # set +x 00:20:15.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.933 00:29:03 -- host/identify.sh@19 -- # nvmfpid=92968 00:20:15.933 00:29:03 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:15.933 00:29:03 -- host/identify.sh@23 -- # waitforlisten 92968 00:20:15.933 00:29:03 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:15.933 00:29:03 -- common/autotest_common.sh@819 -- # '[' -z 92968 ']' 00:20:15.933 00:29:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.933 00:29:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:15.933 00:29:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.933 00:29:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:15.933 00:29:03 -- common/autotest_common.sh@10 -- # set +x 00:20:16.191 [2024-07-13 00:29:03.177242] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
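[editor's note] The xtrace above is nvmf_veth_init building the throwaway test network for this run: a veth pair for the initiator side (nvmf_init_if/nvmf_init_br), two target-side veth pairs moved into the nvmf_tgt_ns_spdk namespace, a bridge joining the host-side ends, iptables rules opening TCP port 4420 and bridged forwarding, and three ping checks before nvmf_tgt is started inside the namespace. A condensed, hand-runnable sketch of the same bring-up (same interface names and 10.0.0.x addressing as in the trace; a sketch, not the script itself):

  # namespace and veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # target-facing ends go into the namespace, then addresses are assigned
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring links up and bridge the host-side ends together
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # open the NVMe/TCP port, allow bridged traffic, and verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # start the SPDK target inside the namespace, as identify.sh@18 does
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are expected: the script first tears down any leftovers from a previous run before recreating them.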
00:20:16.191 [2024-07-13 00:29:03.177342] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.191 [2024-07-13 00:29:03.322418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:16.448 [2024-07-13 00:29:03.459333] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:16.448 [2024-07-13 00:29:03.459512] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.448 [2024-07-13 00:29:03.459529] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.448 [2024-07-13 00:29:03.459541] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:16.448 [2024-07-13 00:29:03.459699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.448 [2024-07-13 00:29:03.460303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.448 [2024-07-13 00:29:03.460456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:16.448 [2024-07-13 00:29:03.460460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.016 00:29:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:17.016 00:29:04 -- common/autotest_common.sh@852 -- # return 0 00:20:17.016 00:29:04 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:17.016 00:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.016 00:29:04 -- common/autotest_common.sh@10 -- # set +x 00:20:17.016 [2024-07-13 00:29:04.173552] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.016 00:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.016 00:29:04 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:17.016 00:29:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:17.016 00:29:04 -- common/autotest_common.sh@10 -- # set +x 00:20:17.016 00:29:04 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:17.016 00:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.016 00:29:04 -- common/autotest_common.sh@10 -- # set +x 00:20:17.275 Malloc0 00:20:17.275 00:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.275 00:29:04 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:17.275 00:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.275 00:29:04 -- common/autotest_common.sh@10 -- # set +x 00:20:17.275 00:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.275 00:29:04 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:17.275 00:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.275 00:29:04 -- common/autotest_common.sh@10 -- # set +x 00:20:17.275 00:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.275 00:29:04 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:17.275 00:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.275 00:29:04 -- common/autotest_common.sh@10 -- # set +x 00:20:17.275 [2024-07-13 00:29:04.299171] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.275 00:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.275 00:29:04 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:17.275 00:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.275 00:29:04 -- common/autotest_common.sh@10 -- # set +x 00:20:17.275 00:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.275 00:29:04 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:17.275 00:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.275 00:29:04 -- common/autotest_common.sh@10 -- # set +x 00:20:17.275 [2024-07-13 00:29:04.314909] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:17.275 [ 00:20:17.275 { 00:20:17.275 "allow_any_host": true, 00:20:17.275 "hosts": [], 00:20:17.275 "listen_addresses": [ 00:20:17.275 { 00:20:17.275 "adrfam": "IPv4", 00:20:17.275 "traddr": "10.0.0.2", 00:20:17.275 "transport": "TCP", 00:20:17.275 "trsvcid": "4420", 00:20:17.275 "trtype": "TCP" 00:20:17.275 } 00:20:17.275 ], 00:20:17.275 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:17.275 "subtype": "Discovery" 00:20:17.275 }, 00:20:17.275 { 00:20:17.275 "allow_any_host": true, 00:20:17.275 "hosts": [], 00:20:17.275 "listen_addresses": [ 00:20:17.275 { 00:20:17.275 "adrfam": "IPv4", 00:20:17.275 "traddr": "10.0.0.2", 00:20:17.275 "transport": "TCP", 00:20:17.275 "trsvcid": "4420", 00:20:17.275 "trtype": "TCP" 00:20:17.275 } 00:20:17.275 ], 00:20:17.275 "max_cntlid": 65519, 00:20:17.275 "max_namespaces": 32, 00:20:17.275 "min_cntlid": 1, 00:20:17.275 "model_number": "SPDK bdev Controller", 00:20:17.275 "namespaces": [ 00:20:17.275 { 00:20:17.275 "bdev_name": "Malloc0", 00:20:17.275 "eui64": "ABCDEF0123456789", 00:20:17.275 "name": "Malloc0", 00:20:17.275 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:17.275 "nsid": 1, 00:20:17.275 "uuid": "4d6c8687-e073-4c55-9020-27a18ad8c9dd" 00:20:17.275 } 00:20:17.275 ], 00:20:17.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.275 "serial_number": "SPDK00000000000001", 00:20:17.275 "subtype": "NVMe" 00:20:17.275 } 00:20:17.275 ] 00:20:17.275 00:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.275 00:29:04 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:17.275 [2024-07-13 00:29:04.354669] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
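[editor's note] With the target up and listening on /var/tmp/spdk.sock, identify.sh configures it entirely over JSON-RPC: a TCP transport, a 64 MB / 512-byte-block malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as namespace 1, plus a data listener and a discovery listener on 10.0.0.2:4420, and then dumps nvmf_get_subsystems (the JSON above). A sketch of the same configuration done by hand with scripts/rpc.py, mirroring the rpc_cmd calls in the trace (paths assume this repo checkout):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems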
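[editor's note] The spdk_nvme_identify invocation at the end of the trace above connects to the discovery subsystem (nqn.2014-08.org.nvmexpress.discovery) at 10.0.0.2:4420 with the SPDK userspace initiator and prints the controller data and the two-entry discovery log page reproduced further below (the discovery subsystem itself plus nqn.2016-06.io.spdk:cnode1). Roughly the same information could be pulled with the kernel initiator from the host side of the veth pair, something like the following nvme-cli session (not what this test actually runs):

  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # and to actually attach the exported namespace:
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1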
00:20:17.275 [2024-07-13 00:29:04.354731] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93023 ] 00:20:17.275 [2024-07-13 00:29:04.502812] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:17.275 [2024-07-13 00:29:04.502910] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:17.275 [2024-07-13 00:29:04.502930] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:17.275 [2024-07-13 00:29:04.502948] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:17.275 [2024-07-13 00:29:04.502969] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:17.275 [2024-07-13 00:29:04.503169] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:17.275 [2024-07-13 00:29:04.503271] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x99cd70 0 00:20:17.537 [2024-07-13 00:29:04.508657] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:17.537 [2024-07-13 00:29:04.508697] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:17.537 [2024-07-13 00:29:04.508716] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:17.537 [2024-07-13 00:29:04.508722] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:17.537 [2024-07-13 00:29:04.508781] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.537 [2024-07-13 00:29:04.508790] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.537 [2024-07-13 00:29:04.508796] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x99cd70) 00:20:17.537 [2024-07-13 00:29:04.508815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:17.537 [2024-07-13 00:29:04.508862] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e65f0, cid 0, qid 0 00:20:17.537 [2024-07-13 00:29:04.516647] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.537 [2024-07-13 00:29:04.516673] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.537 [2024-07-13 00:29:04.516692] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.537 [2024-07-13 00:29:04.516699] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e65f0) on tqpair=0x99cd70 00:20:17.537 [2024-07-13 00:29:04.516718] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:17.537 [2024-07-13 00:29:04.516728] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:17.537 [2024-07-13 00:29:04.516736] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:17.537 [2024-07-13 00:29:04.516757] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.537 [2024-07-13 00:29:04.516763] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.537 [2024-07-13 00:29:04.516768] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x99cd70) 00:20:17.537 [2024-07-13 00:29:04.516781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.537 [2024-07-13 00:29:04.516818] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e65f0, cid 0, qid 0 00:20:17.537 [2024-07-13 00:29:04.516901] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.537 [2024-07-13 00:29:04.516910] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.537 [2024-07-13 00:29:04.516926] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.537 [2024-07-13 00:29:04.516931] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e65f0) on tqpair=0x99cd70 00:20:17.537 [2024-07-13 00:29:04.516941] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:17.537 [2024-07-13 00:29:04.516951] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:17.537 [2024-07-13 00:29:04.516973] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.537 [2024-07-13 00:29:04.516991] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.537 [2024-07-13 00:29:04.516996] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x99cd70) 00:20:17.537 [2024-07-13 00:29:04.517006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.537 [2024-07-13 00:29:04.517043] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e65f0, cid 0, qid 0 00:20:17.537 [2024-07-13 00:29:04.517115] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.537 [2024-07-13 00:29:04.517124] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.537 [2024-07-13 00:29:04.517129] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.537 [2024-07-13 00:29:04.517134] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e65f0) on tqpair=0x99cd70 00:20:17.537 [2024-07-13 00:29:04.517142] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:17.537 [2024-07-13 00:29:04.517154] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:17.537 [2024-07-13 00:29:04.517163] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.537 [2024-07-13 00:29:04.517168] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.537 [2024-07-13 00:29:04.517173] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x99cd70) 00:20:17.537 [2024-07-13 00:29:04.517183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.537 [2024-07-13 00:29:04.517207] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e65f0, cid 0, qid 0 00:20:17.537 [2024-07-13 00:29:04.517260] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.537 [2024-07-13 00:29:04.517269] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:17.537 [2024-07-13 00:29:04.517274] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.537 [2024-07-13 00:29:04.517279] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e65f0) on tqpair=0x99cd70 00:20:17.537 [2024-07-13 00:29:04.517287] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:17.537 [2024-07-13 00:29:04.517300] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.537 [2024-07-13 00:29:04.517305] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.537 [2024-07-13 00:29:04.517310] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x99cd70) 00:20:17.537 [2024-07-13 00:29:04.517319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.537 [2024-07-13 00:29:04.517342] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e65f0, cid 0, qid 0 00:20:17.537 [2024-07-13 00:29:04.517405] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.538 [2024-07-13 00:29:04.517414] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.538 [2024-07-13 00:29:04.517419] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.517424] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e65f0) on tqpair=0x99cd70 00:20:17.538 [2024-07-13 00:29:04.517431] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:17.538 [2024-07-13 00:29:04.517438] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:17.538 [2024-07-13 00:29:04.517448] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:17.538 [2024-07-13 00:29:04.517555] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:17.538 [2024-07-13 00:29:04.517573] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:17.538 [2024-07-13 00:29:04.517586] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.517592] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.517597] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x99cd70) 00:20:17.538 [2024-07-13 00:29:04.517607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.538 [2024-07-13 00:29:04.517654] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e65f0, cid 0, qid 0 00:20:17.538 [2024-07-13 00:29:04.517731] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.538 [2024-07-13 00:29:04.517740] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.538 [2024-07-13 00:29:04.517745] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.517750] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e65f0) on tqpair=0x99cd70 00:20:17.538 [2024-07-13 00:29:04.517757] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:17.538 [2024-07-13 00:29:04.517771] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.517776] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.517781] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x99cd70) 00:20:17.538 [2024-07-13 00:29:04.517792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.538 [2024-07-13 00:29:04.517816] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e65f0, cid 0, qid 0 00:20:17.538 [2024-07-13 00:29:04.517879] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.538 [2024-07-13 00:29:04.517893] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.538 [2024-07-13 00:29:04.517898] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.517904] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e65f0) on tqpair=0x99cd70 00:20:17.538 [2024-07-13 00:29:04.517911] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:17.538 [2024-07-13 00:29:04.517918] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:17.538 [2024-07-13 00:29:04.517929] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:17.538 [2024-07-13 00:29:04.517949] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:17.538 [2024-07-13 00:29:04.517962] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.517968] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.517973] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x99cd70) 00:20:17.538 [2024-07-13 00:29:04.517983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.538 [2024-07-13 00:29:04.518009] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e65f0, cid 0, qid 0 00:20:17.538 [2024-07-13 00:29:04.518132] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.538 [2024-07-13 00:29:04.518150] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.538 [2024-07-13 00:29:04.518157] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518162] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x99cd70): datao=0, datal=4096, cccid=0 00:20:17.538 [2024-07-13 00:29:04.518169] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9e65f0) on tqpair(0x99cd70): expected_datao=0, payload_size=4096 00:20:17.538 [2024-07-13 00:29:04.518181] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518187] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518199] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.538 [2024-07-13 00:29:04.518207] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.538 [2024-07-13 00:29:04.518212] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518217] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e65f0) on tqpair=0x99cd70 00:20:17.538 [2024-07-13 00:29:04.518228] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:17.538 [2024-07-13 00:29:04.518235] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:17.538 [2024-07-13 00:29:04.518241] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:17.538 [2024-07-13 00:29:04.518248] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:17.538 [2024-07-13 00:29:04.518254] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:17.538 [2024-07-13 00:29:04.518261] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:17.538 [2024-07-13 00:29:04.518278] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:17.538 [2024-07-13 00:29:04.518289] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518295] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518300] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x99cd70) 00:20:17.538 [2024-07-13 00:29:04.518310] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:17.538 [2024-07-13 00:29:04.518337] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e65f0, cid 0, qid 0 00:20:17.538 [2024-07-13 00:29:04.518404] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.538 [2024-07-13 00:29:04.518422] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.538 [2024-07-13 00:29:04.518429] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518434] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e65f0) on tqpair=0x99cd70 00:20:17.538 [2024-07-13 00:29:04.518444] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518450] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518455] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x99cd70) 00:20:17.538 [2024-07-13 00:29:04.518464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.538 [2024-07-13 00:29:04.518472] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518477] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518482] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x99cd70) 00:20:17.538 [2024-07-13 00:29:04.518490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.538 [2024-07-13 00:29:04.518498] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518503] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518508] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x99cd70) 00:20:17.538 [2024-07-13 00:29:04.518515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.538 [2024-07-13 00:29:04.518523] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518528] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518533] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.538 [2024-07-13 00:29:04.518540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.538 [2024-07-13 00:29:04.518548] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:17.538 [2024-07-13 00:29:04.518564] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:17.538 [2024-07-13 00:29:04.518574] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518579] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518584] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x99cd70) 00:20:17.538 [2024-07-13 00:29:04.518594] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.538 [2024-07-13 00:29:04.518638] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e65f0, cid 0, qid 0 00:20:17.538 [2024-07-13 00:29:04.518649] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6750, cid 1, qid 0 00:20:17.538 [2024-07-13 00:29:04.518656] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e68b0, cid 2, qid 0 00:20:17.538 [2024-07-13 00:29:04.518662] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.538 [2024-07-13 00:29:04.518668] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6b70, cid 4, qid 0 00:20:17.538 [2024-07-13 00:29:04.518756] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.538 [2024-07-13 00:29:04.518770] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.538 [2024-07-13 00:29:04.518775] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518781] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6b70) on tqpair=0x99cd70 00:20:17.538 
[2024-07-13 00:29:04.518788] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:17.538 [2024-07-13 00:29:04.518796] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:17.538 [2024-07-13 00:29:04.518810] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518816] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.538 [2024-07-13 00:29:04.518821] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x99cd70) 00:20:17.538 [2024-07-13 00:29:04.518831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.538 [2024-07-13 00:29:04.518857] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6b70, cid 4, qid 0 00:20:17.538 [2024-07-13 00:29:04.518929] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.539 [2024-07-13 00:29:04.518937] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.539 [2024-07-13 00:29:04.518942] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.518948] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x99cd70): datao=0, datal=4096, cccid=4 00:20:17.539 [2024-07-13 00:29:04.518954] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9e6b70) on tqpair(0x99cd70): expected_datao=0, payload_size=4096 00:20:17.539 [2024-07-13 00:29:04.518964] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.518970] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.518981] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.539 [2024-07-13 00:29:04.518989] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.539 [2024-07-13 00:29:04.518994] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.518999] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6b70) on tqpair=0x99cd70 00:20:17.539 [2024-07-13 00:29:04.519016] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:17.539 [2024-07-13 00:29:04.519081] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.519092] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.519098] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x99cd70) 00:20:17.539 [2024-07-13 00:29:04.519108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.539 [2024-07-13 00:29:04.519119] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.519124] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.519129] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x99cd70) 00:20:17.539 [2024-07-13 00:29:04.519137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:17.539 [2024-07-13 00:29:04.519176] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6b70, cid 4, qid 0 00:20:17.539 [2024-07-13 00:29:04.519185] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6cd0, cid 5, qid 0 00:20:17.539 [2024-07-13 00:29:04.519346] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.539 [2024-07-13 00:29:04.519369] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.539 [2024-07-13 00:29:04.519375] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.519380] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x99cd70): datao=0, datal=1024, cccid=4 00:20:17.539 [2024-07-13 00:29:04.519387] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9e6b70) on tqpair(0x99cd70): expected_datao=0, payload_size=1024 00:20:17.539 [2024-07-13 00:29:04.519397] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.519402] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.519410] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.539 [2024-07-13 00:29:04.519418] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.539 [2024-07-13 00:29:04.519423] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.519428] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6cd0) on tqpair=0x99cd70 00:20:17.539 [2024-07-13 00:29:04.559722] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.539 [2024-07-13 00:29:04.559744] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.539 [2024-07-13 00:29:04.559765] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.559769] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6b70) on tqpair=0x99cd70 00:20:17.539 [2024-07-13 00:29:04.559784] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.559789] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.559792] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x99cd70) 00:20:17.539 [2024-07-13 00:29:04.559801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.539 [2024-07-13 00:29:04.559834] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6b70, cid 4, qid 0 00:20:17.539 [2024-07-13 00:29:04.559978] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.539 [2024-07-13 00:29:04.559984] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.539 [2024-07-13 00:29:04.559988] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.559992] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x99cd70): datao=0, datal=3072, cccid=4 00:20:17.539 [2024-07-13 00:29:04.560013] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9e6b70) on tqpair(0x99cd70): expected_datao=0, payload_size=3072 00:20:17.539 [2024-07-13 00:29:04.560021] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.539 [2024-07-13 
00:29:04.560042] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.560051] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.539 [2024-07-13 00:29:04.560057] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.539 [2024-07-13 00:29:04.560061] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.560065] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6b70) on tqpair=0x99cd70 00:20:17.539 [2024-07-13 00:29:04.560076] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.560081] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.560085] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x99cd70) 00:20:17.539 [2024-07-13 00:29:04.560092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.539 [2024-07-13 00:29:04.560119] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6b70, cid 4, qid 0 00:20:17.539 [2024-07-13 00:29:04.560194] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.539 [2024-07-13 00:29:04.560201] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.539 [2024-07-13 00:29:04.560206] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.560210] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x99cd70): datao=0, datal=8, cccid=4 00:20:17.539 [2024-07-13 00:29:04.560215] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9e6b70) on tqpair(0x99cd70): expected_datao=0, payload_size=8 00:20:17.539 [2024-07-13 00:29:04.560223] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.560227] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.604656] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.539 [2024-07-13 00:29:04.604680] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.539 [2024-07-13 00:29:04.604701] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.539 [2024-07-13 00:29:04.604705] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6b70) on tqpair=0x99cd70 00:20:17.539 ===================================================== 00:20:17.539 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:17.539 ===================================================== 00:20:17.539 Controller Capabilities/Features 00:20:17.539 ================================ 00:20:17.539 Vendor ID: 0000 00:20:17.539 Subsystem Vendor ID: 0000 00:20:17.539 Serial Number: .................... 00:20:17.539 Model Number: ........................................ 
00:20:17.539 Firmware Version: 24.01.1 00:20:17.539 Recommended Arb Burst: 0 00:20:17.539 IEEE OUI Identifier: 00 00 00 00:20:17.539 Multi-path I/O 00:20:17.539 May have multiple subsystem ports: No 00:20:17.539 May have multiple controllers: No 00:20:17.539 Associated with SR-IOV VF: No 00:20:17.539 Max Data Transfer Size: 131072 00:20:17.539 Max Number of Namespaces: 0 00:20:17.539 Max Number of I/O Queues: 1024 00:20:17.539 NVMe Specification Version (VS): 1.3 00:20:17.539 NVMe Specification Version (Identify): 1.3 00:20:17.539 Maximum Queue Entries: 128 00:20:17.539 Contiguous Queues Required: Yes 00:20:17.539 Arbitration Mechanisms Supported 00:20:17.539 Weighted Round Robin: Not Supported 00:20:17.539 Vendor Specific: Not Supported 00:20:17.539 Reset Timeout: 15000 ms 00:20:17.539 Doorbell Stride: 4 bytes 00:20:17.539 NVM Subsystem Reset: Not Supported 00:20:17.539 Command Sets Supported 00:20:17.539 NVM Command Set: Supported 00:20:17.539 Boot Partition: Not Supported 00:20:17.539 Memory Page Size Minimum: 4096 bytes 00:20:17.539 Memory Page Size Maximum: 4096 bytes 00:20:17.539 Persistent Memory Region: Not Supported 00:20:17.539 Optional Asynchronous Events Supported 00:20:17.539 Namespace Attribute Notices: Not Supported 00:20:17.539 Firmware Activation Notices: Not Supported 00:20:17.539 ANA Change Notices: Not Supported 00:20:17.539 PLE Aggregate Log Change Notices: Not Supported 00:20:17.539 LBA Status Info Alert Notices: Not Supported 00:20:17.539 EGE Aggregate Log Change Notices: Not Supported 00:20:17.539 Normal NVM Subsystem Shutdown event: Not Supported 00:20:17.539 Zone Descriptor Change Notices: Not Supported 00:20:17.539 Discovery Log Change Notices: Supported 00:20:17.539 Controller Attributes 00:20:17.539 128-bit Host Identifier: Not Supported 00:20:17.539 Non-Operational Permissive Mode: Not Supported 00:20:17.539 NVM Sets: Not Supported 00:20:17.539 Read Recovery Levels: Not Supported 00:20:17.539 Endurance Groups: Not Supported 00:20:17.539 Predictable Latency Mode: Not Supported 00:20:17.539 Traffic Based Keep ALive: Not Supported 00:20:17.539 Namespace Granularity: Not Supported 00:20:17.539 SQ Associations: Not Supported 00:20:17.539 UUID List: Not Supported 00:20:17.539 Multi-Domain Subsystem: Not Supported 00:20:17.539 Fixed Capacity Management: Not Supported 00:20:17.539 Variable Capacity Management: Not Supported 00:20:17.539 Delete Endurance Group: Not Supported 00:20:17.539 Delete NVM Set: Not Supported 00:20:17.539 Extended LBA Formats Supported: Not Supported 00:20:17.539 Flexible Data Placement Supported: Not Supported 00:20:17.539 00:20:17.539 Controller Memory Buffer Support 00:20:17.539 ================================ 00:20:17.539 Supported: No 00:20:17.539 00:20:17.539 Persistent Memory Region Support 00:20:17.539 ================================ 00:20:17.539 Supported: No 00:20:17.539 00:20:17.539 Admin Command Set Attributes 00:20:17.540 ============================ 00:20:17.540 Security Send/Receive: Not Supported 00:20:17.540 Format NVM: Not Supported 00:20:17.540 Firmware Activate/Download: Not Supported 00:20:17.540 Namespace Management: Not Supported 00:20:17.540 Device Self-Test: Not Supported 00:20:17.540 Directives: Not Supported 00:20:17.540 NVMe-MI: Not Supported 00:20:17.540 Virtualization Management: Not Supported 00:20:17.540 Doorbell Buffer Config: Not Supported 00:20:17.540 Get LBA Status Capability: Not Supported 00:20:17.540 Command & Feature Lockdown Capability: Not Supported 00:20:17.540 Abort Command Limit: 1 00:20:17.540 
Async Event Request Limit: 4 00:20:17.540 Number of Firmware Slots: N/A 00:20:17.540 Firmware Slot 1 Read-Only: N/A 00:20:17.540 Firmware Activation Without Reset: N/A 00:20:17.540 Multiple Update Detection Support: N/A 00:20:17.540 Firmware Update Granularity: No Information Provided 00:20:17.540 Per-Namespace SMART Log: No 00:20:17.540 Asymmetric Namespace Access Log Page: Not Supported 00:20:17.540 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:17.540 Command Effects Log Page: Not Supported 00:20:17.540 Get Log Page Extended Data: Supported 00:20:17.540 Telemetry Log Pages: Not Supported 00:20:17.540 Persistent Event Log Pages: Not Supported 00:20:17.540 Supported Log Pages Log Page: May Support 00:20:17.540 Commands Supported & Effects Log Page: Not Supported 00:20:17.540 Feature Identifiers & Effects Log Page:May Support 00:20:17.540 NVMe-MI Commands & Effects Log Page: May Support 00:20:17.540 Data Area 4 for Telemetry Log: Not Supported 00:20:17.540 Error Log Page Entries Supported: 128 00:20:17.540 Keep Alive: Not Supported 00:20:17.540 00:20:17.540 NVM Command Set Attributes 00:20:17.540 ========================== 00:20:17.540 Submission Queue Entry Size 00:20:17.540 Max: 1 00:20:17.540 Min: 1 00:20:17.540 Completion Queue Entry Size 00:20:17.540 Max: 1 00:20:17.540 Min: 1 00:20:17.540 Number of Namespaces: 0 00:20:17.540 Compare Command: Not Supported 00:20:17.540 Write Uncorrectable Command: Not Supported 00:20:17.540 Dataset Management Command: Not Supported 00:20:17.540 Write Zeroes Command: Not Supported 00:20:17.540 Set Features Save Field: Not Supported 00:20:17.540 Reservations: Not Supported 00:20:17.540 Timestamp: Not Supported 00:20:17.540 Copy: Not Supported 00:20:17.540 Volatile Write Cache: Not Present 00:20:17.540 Atomic Write Unit (Normal): 1 00:20:17.540 Atomic Write Unit (PFail): 1 00:20:17.540 Atomic Compare & Write Unit: 1 00:20:17.540 Fused Compare & Write: Supported 00:20:17.540 Scatter-Gather List 00:20:17.540 SGL Command Set: Supported 00:20:17.540 SGL Keyed: Supported 00:20:17.540 SGL Bit Bucket Descriptor: Not Supported 00:20:17.540 SGL Metadata Pointer: Not Supported 00:20:17.540 Oversized SGL: Not Supported 00:20:17.540 SGL Metadata Address: Not Supported 00:20:17.540 SGL Offset: Supported 00:20:17.540 Transport SGL Data Block: Not Supported 00:20:17.540 Replay Protected Memory Block: Not Supported 00:20:17.540 00:20:17.540 Firmware Slot Information 00:20:17.540 ========================= 00:20:17.540 Active slot: 0 00:20:17.540 00:20:17.540 00:20:17.540 Error Log 00:20:17.540 ========= 00:20:17.540 00:20:17.540 Active Namespaces 00:20:17.540 ================= 00:20:17.540 Discovery Log Page 00:20:17.540 ================== 00:20:17.540 Generation Counter: 2 00:20:17.540 Number of Records: 2 00:20:17.540 Record Format: 0 00:20:17.540 00:20:17.540 Discovery Log Entry 0 00:20:17.540 ---------------------- 00:20:17.540 Transport Type: 3 (TCP) 00:20:17.540 Address Family: 1 (IPv4) 00:20:17.540 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:17.540 Entry Flags: 00:20:17.540 Duplicate Returned Information: 1 00:20:17.540 Explicit Persistent Connection Support for Discovery: 1 00:20:17.540 Transport Requirements: 00:20:17.540 Secure Channel: Not Required 00:20:17.540 Port ID: 0 (0x0000) 00:20:17.540 Controller ID: 65535 (0xffff) 00:20:17.540 Admin Max SQ Size: 128 00:20:17.540 Transport Service Identifier: 4420 00:20:17.540 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:17.540 Transport Address: 10.0.0.2 00:20:17.540 
Discovery Log Entry 1 00:20:17.540 ---------------------- 00:20:17.540 Transport Type: 3 (TCP) 00:20:17.540 Address Family: 1 (IPv4) 00:20:17.540 Subsystem Type: 2 (NVM Subsystem) 00:20:17.540 Entry Flags: 00:20:17.540 Duplicate Returned Information: 0 00:20:17.540 Explicit Persistent Connection Support for Discovery: 0 00:20:17.540 Transport Requirements: 00:20:17.540 Secure Channel: Not Required 00:20:17.540 Port ID: 0 (0x0000) 00:20:17.540 Controller ID: 65535 (0xffff) 00:20:17.540 Admin Max SQ Size: 128 00:20:17.540 Transport Service Identifier: 4420 00:20:17.540 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:17.540 Transport Address: 10.0.0.2 [2024-07-13 00:29:04.604841] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:17.540 [2024-07-13 00:29:04.604862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.540 [2024-07-13 00:29:04.604871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.540 [2024-07-13 00:29:04.604877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.540 [2024-07-13 00:29:04.604882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.540 [2024-07-13 00:29:04.604893] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.540 [2024-07-13 00:29:04.604897] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.540 [2024-07-13 00:29:04.604901] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.540 [2024-07-13 00:29:04.604909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.540 [2024-07-13 00:29:04.604953] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.540 [2024-07-13 00:29:04.605129] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.540 [2024-07-13 00:29:04.605137] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.540 [2024-07-13 00:29:04.605141] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.540 [2024-07-13 00:29:04.605145] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.540 [2024-07-13 00:29:04.605154] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.540 [2024-07-13 00:29:04.605158] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.540 [2024-07-13 00:29:04.605162] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.540 [2024-07-13 00:29:04.605169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.540 [2024-07-13 00:29:04.605195] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.540 [2024-07-13 00:29:04.605294] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.540 [2024-07-13 00:29:04.605318] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.540 [2024-07-13 00:29:04.605323] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.540 [2024-07-13 00:29:04.605327] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.540 [2024-07-13 00:29:04.605334] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:17.540 [2024-07-13 00:29:04.605339] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:17.540 [2024-07-13 00:29:04.605350] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.540 [2024-07-13 00:29:04.605355] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.540 [2024-07-13 00:29:04.605359] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.540 [2024-07-13 00:29:04.605367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.540 [2024-07-13 00:29:04.605388] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.540 [2024-07-13 00:29:04.605444] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.540 [2024-07-13 00:29:04.605451] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.540 [2024-07-13 00:29:04.605455] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.540 [2024-07-13 00:29:04.605459] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.540 [2024-07-13 00:29:04.605471] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.540 [2024-07-13 00:29:04.605475] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.540 [2024-07-13 00:29:04.605479] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.540 [2024-07-13 00:29:04.605487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.540 [2024-07-13 00:29:04.605506] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.540 [2024-07-13 00:29:04.605562] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.541 [2024-07-13 00:29:04.605569] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.541 [2024-07-13 00:29:04.605572] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.605576] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.541 [2024-07-13 00:29:04.605587] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.605592] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.605596] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.541 [2024-07-13 00:29:04.605603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.541 [2024-07-13 00:29:04.605649] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.541 [2024-07-13 00:29:04.605716] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.541 [2024-07-13 
00:29:04.605731] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.541 [2024-07-13 00:29:04.605736] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.605740] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.541 [2024-07-13 00:29:04.605752] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.605757] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.605761] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.541 [2024-07-13 00:29:04.605769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.541 [2024-07-13 00:29:04.605790] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.541 [2024-07-13 00:29:04.605848] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.541 [2024-07-13 00:29:04.605855] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.541 [2024-07-13 00:29:04.605858] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.605863] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.541 [2024-07-13 00:29:04.605873] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.605878] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.605882] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.541 [2024-07-13 00:29:04.605889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.541 [2024-07-13 00:29:04.605908] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.541 [2024-07-13 00:29:04.605963] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.541 [2024-07-13 00:29:04.605975] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.541 [2024-07-13 00:29:04.605979] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.605983] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.541 [2024-07-13 00:29:04.605995] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.605999] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606003] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.541 [2024-07-13 00:29:04.606011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.541 [2024-07-13 00:29:04.606031] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.541 [2024-07-13 00:29:04.606101] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.541 [2024-07-13 00:29:04.606108] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.541 [2024-07-13 00:29:04.606111] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.541 
[2024-07-13 00:29:04.606116] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.541 [2024-07-13 00:29:04.606126] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606131] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606135] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.541 [2024-07-13 00:29:04.606142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.541 [2024-07-13 00:29:04.606161] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.541 [2024-07-13 00:29:04.606217] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.541 [2024-07-13 00:29:04.606228] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.541 [2024-07-13 00:29:04.606232] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606237] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.541 [2024-07-13 00:29:04.606248] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606253] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606257] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.541 [2024-07-13 00:29:04.606264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.541 [2024-07-13 00:29:04.606284] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.541 [2024-07-13 00:29:04.606337] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.541 [2024-07-13 00:29:04.606344] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.541 [2024-07-13 00:29:04.606348] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606352] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.541 [2024-07-13 00:29:04.606363] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606367] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606371] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.541 [2024-07-13 00:29:04.606379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.541 [2024-07-13 00:29:04.606398] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.541 [2024-07-13 00:29:04.606459] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.541 [2024-07-13 00:29:04.606465] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.541 [2024-07-13 00:29:04.606469] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606473] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.541 [2024-07-13 00:29:04.606484] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606488] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606492] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.541 [2024-07-13 00:29:04.606500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.541 [2024-07-13 00:29:04.606519] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.541 [2024-07-13 00:29:04.606575] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.541 [2024-07-13 00:29:04.606583] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.541 [2024-07-13 00:29:04.606587] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606591] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.541 [2024-07-13 00:29:04.606602] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606606] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606610] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.541 [2024-07-13 00:29:04.606630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.541 [2024-07-13 00:29:04.606652] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.541 [2024-07-13 00:29:04.606712] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.541 [2024-07-13 00:29:04.606718] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.541 [2024-07-13 00:29:04.606722] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606726] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.541 [2024-07-13 00:29:04.606738] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606742] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606746] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.541 [2024-07-13 00:29:04.606754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.541 [2024-07-13 00:29:04.606773] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.541 [2024-07-13 00:29:04.606825] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.541 [2024-07-13 00:29:04.606831] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.541 [2024-07-13 00:29:04.606835] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606839] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.541 [2024-07-13 00:29:04.606850] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606855] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.541 [2024-07-13 00:29:04.606859] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.541 [2024-07-13 00:29:04.606866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.541 [2024-07-13 00:29:04.606885] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.541 [2024-07-13 00:29:04.606935] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.541 [2024-07-13 00:29:04.606941] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.542 [2024-07-13 00:29:04.606945] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.542 [2024-07-13 00:29:04.606949] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.542 [2024-07-13 00:29:04.606960] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.542 [2024-07-13 00:29:04.606965] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.542 [2024-07-13 00:29:04.606969] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.542 [2024-07-13 00:29:04.606976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.542 [2024-07-13 00:29:04.606995] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.542 [2024-07-13 00:29:04.607050] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.542 [2024-07-13 00:29:04.607057] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.542 [2024-07-13 00:29:04.607060] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.542 [2024-07-13 00:29:04.607065] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.542 [2024-07-13 00:29:04.607075] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.542 [2024-07-13 00:29:04.607080] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.542 [2024-07-13 00:29:04.607084] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.542 [2024-07-13 00:29:04.607091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.542 [2024-07-13 00:29:04.607110] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.542 [2024-07-13 00:29:04.607165] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.542 [2024-07-13 00:29:04.607171] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.543 [2024-07-13 00:29:04.607175] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607179] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.543 [2024-07-13 00:29:04.607190] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607195] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607199] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.543 [2024-07-13 00:29:04.607206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.543 [2024-07-13 00:29:04.607225] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.543 [2024-07-13 00:29:04.607279] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.543 [2024-07-13 00:29:04.607286] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.543 [2024-07-13 00:29:04.607290] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607294] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.543 [2024-07-13 00:29:04.607305] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607310] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607314] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.543 [2024-07-13 00:29:04.607321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.543 [2024-07-13 00:29:04.607340] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.543 [2024-07-13 00:29:04.607395] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.543 [2024-07-13 00:29:04.607402] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.543 [2024-07-13 00:29:04.607406] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607410] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.543 [2024-07-13 00:29:04.607420] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607425] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607429] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.543 [2024-07-13 00:29:04.607436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.543 [2024-07-13 00:29:04.607456] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.543 [2024-07-13 00:29:04.607506] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.543 [2024-07-13 00:29:04.607513] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.543 [2024-07-13 00:29:04.607517] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607521] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.543 [2024-07-13 00:29:04.607531] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607536] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607540] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.543 [2024-07-13 00:29:04.607547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.543 [2024-07-13 00:29:04.607566] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 
00:20:17.543 [2024-07-13 00:29:04.607632] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.543 [2024-07-13 00:29:04.607640] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.543 [2024-07-13 00:29:04.607644] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607649] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.543 [2024-07-13 00:29:04.607660] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607664] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607668] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.543 [2024-07-13 00:29:04.607676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.543 [2024-07-13 00:29:04.607697] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.543 [2024-07-13 00:29:04.607755] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.543 [2024-07-13 00:29:04.607762] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.543 [2024-07-13 00:29:04.607766] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607770] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.543 [2024-07-13 00:29:04.607781] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607785] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607789] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.543 [2024-07-13 00:29:04.607797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.543 [2024-07-13 00:29:04.607816] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.543 [2024-07-13 00:29:04.607868] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.543 [2024-07-13 00:29:04.607875] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.543 [2024-07-13 00:29:04.607878] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607882] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.543 [2024-07-13 00:29:04.607893] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607898] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.607902] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.543 [2024-07-13 00:29:04.607909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.543 [2024-07-13 00:29:04.607928] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.543 [2024-07-13 00:29:04.607987] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.543 [2024-07-13 00:29:04.607994] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:17.543 [2024-07-13 00:29:04.607997] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.608002] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.543 [2024-07-13 00:29:04.608012] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.608017] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.608021] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.543 [2024-07-13 00:29:04.608028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.543 [2024-07-13 00:29:04.608047] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.543 [2024-07-13 00:29:04.608099] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.543 [2024-07-13 00:29:04.608106] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.543 [2024-07-13 00:29:04.608109] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.608114] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.543 [2024-07-13 00:29:04.608124] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.608129] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.608133] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.543 [2024-07-13 00:29:04.608140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.543 [2024-07-13 00:29:04.608159] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.543 [2024-07-13 00:29:04.608221] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.543 [2024-07-13 00:29:04.608227] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.543 [2024-07-13 00:29:04.608231] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.608235] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.543 [2024-07-13 00:29:04.608246] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.608251] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.608254] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.543 [2024-07-13 00:29:04.608262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.543 [2024-07-13 00:29:04.608281] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.543 [2024-07-13 00:29:04.608334] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.543 [2024-07-13 00:29:04.608341] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.543 [2024-07-13 00:29:04.608344] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.608349] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.543 [2024-07-13 00:29:04.608359] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.608364] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.608368] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.543 [2024-07-13 00:29:04.608375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.543 [2024-07-13 00:29:04.608394] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.543 [2024-07-13 00:29:04.608450] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.543 [2024-07-13 00:29:04.608461] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.543 [2024-07-13 00:29:04.608465] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.608470] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.543 [2024-07-13 00:29:04.608481] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.608486] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.543 [2024-07-13 00:29:04.608490] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.544 [2024-07-13 00:29:04.608497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.544 [2024-07-13 00:29:04.608517] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.544 [2024-07-13 00:29:04.608581] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.544 [2024-07-13 00:29:04.608592] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.544 [2024-07-13 00:29:04.608605] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.544 [2024-07-13 00:29:04.608609] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.544 [2024-07-13 00:29:04.612645] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.544 [2024-07-13 00:29:04.612665] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.544 [2024-07-13 00:29:04.612670] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x99cd70) 00:20:17.544 [2024-07-13 00:29:04.612695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.544 [2024-07-13 00:29:04.612725] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9e6a10, cid 3, qid 0 00:20:17.544 [2024-07-13 00:29:04.612786] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.544 [2024-07-13 00:29:04.612793] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.544 [2024-07-13 00:29:04.612797] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.544 [2024-07-13 00:29:04.612801] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9e6a10) on tqpair=0x99cd70 00:20:17.544 [2024-07-13 00:29:04.612810] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown 
complete in 7 milliseconds 00:20:17.544 00:20:17.544 00:29:04 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:17.544 [2024-07-13 00:29:04.648974] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:17.544 [2024-07-13 00:29:04.649042] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93025 ] 00:20:17.806 [2024-07-13 00:29:04.790552] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:17.806 [2024-07-13 00:29:04.790649] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:17.806 [2024-07-13 00:29:04.790657] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:17.806 [2024-07-13 00:29:04.790671] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:17.806 [2024-07-13 00:29:04.790682] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:17.806 [2024-07-13 00:29:04.790824] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:17.806 [2024-07-13 00:29:04.790909] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb5bd70 0 00:20:17.806 [2024-07-13 00:29:04.796647] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:17.806 [2024-07-13 00:29:04.796672] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:17.806 [2024-07-13 00:29:04.796694] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:17.806 [2024-07-13 00:29:04.796698] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:17.806 [2024-07-13 00:29:04.796744] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.806 [2024-07-13 00:29:04.796751] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.806 [2024-07-13 00:29:04.796755] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5bd70) 00:20:17.806 [2024-07-13 00:29:04.796769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:17.806 [2024-07-13 00:29:04.796800] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba55f0, cid 0, qid 0 00:20:17.806 [2024-07-13 00:29:04.803682] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.806 [2024-07-13 00:29:04.803700] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.806 [2024-07-13 00:29:04.803705] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.806 [2024-07-13 00:29:04.803709] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba55f0) on tqpair=0xb5bd70 00:20:17.806 [2024-07-13 00:29:04.803722] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:17.806 [2024-07-13 00:29:04.803729] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:17.806 [2024-07-13 00:29:04.803735] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:17.806 [2024-07-13 00:29:04.803751] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.806 [2024-07-13 00:29:04.803770] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.806 [2024-07-13 00:29:04.803774] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5bd70) 00:20:17.806 [2024-07-13 00:29:04.803784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.806 [2024-07-13 00:29:04.803817] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba55f0, cid 0, qid 0 00:20:17.806 [2024-07-13 00:29:04.803890] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.806 [2024-07-13 00:29:04.803897] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.806 [2024-07-13 00:29:04.803901] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.806 [2024-07-13 00:29:04.803904] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba55f0) on tqpair=0xb5bd70 00:20:17.806 [2024-07-13 00:29:04.803926] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:17.806 [2024-07-13 00:29:04.803934] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:17.806 [2024-07-13 00:29:04.803958] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.806 [2024-07-13 00:29:04.803979] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.806 [2024-07-13 00:29:04.803983] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5bd70) 00:20:17.806 [2024-07-13 00:29:04.803991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.806 [2024-07-13 00:29:04.804013] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba55f0, cid 0, qid 0 00:20:17.806 [2024-07-13 00:29:04.804090] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.806 [2024-07-13 00:29:04.804097] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.806 [2024-07-13 00:29:04.804101] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.806 [2024-07-13 00:29:04.804106] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba55f0) on tqpair=0xb5bd70 00:20:17.806 [2024-07-13 00:29:04.804112] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:17.806 [2024-07-13 00:29:04.804121] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:17.806 [2024-07-13 00:29:04.804129] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.804133] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.804137] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5bd70) 00:20:17.807 [2024-07-13 00:29:04.804145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.807 [2024-07-13 00:29:04.804165] 
nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba55f0, cid 0, qid 0 00:20:17.807 [2024-07-13 00:29:04.804224] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.807 [2024-07-13 00:29:04.804231] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.807 [2024-07-13 00:29:04.804235] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.804239] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba55f0) on tqpair=0xb5bd70 00:20:17.807 [2024-07-13 00:29:04.804245] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:17.807 [2024-07-13 00:29:04.804256] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.804261] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.804265] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5bd70) 00:20:17.807 [2024-07-13 00:29:04.804272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.807 [2024-07-13 00:29:04.804293] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba55f0, cid 0, qid 0 00:20:17.807 [2024-07-13 00:29:04.804343] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.807 [2024-07-13 00:29:04.804350] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.807 [2024-07-13 00:29:04.804354] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.804358] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba55f0) on tqpair=0xb5bd70 00:20:17.807 [2024-07-13 00:29:04.804364] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:17.807 [2024-07-13 00:29:04.804369] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:17.807 [2024-07-13 00:29:04.804378] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:17.807 [2024-07-13 00:29:04.804483] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:17.807 [2024-07-13 00:29:04.804488] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:17.807 [2024-07-13 00:29:04.804498] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.804503] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.804507] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5bd70) 00:20:17.807 [2024-07-13 00:29:04.804515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.807 [2024-07-13 00:29:04.804535] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba55f0, cid 0, qid 0 00:20:17.807 [2024-07-13 00:29:04.804590] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.807 [2024-07-13 00:29:04.804608] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.807 [2024-07-13 00:29:04.804612] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.804634] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba55f0) on tqpair=0xb5bd70 00:20:17.807 [2024-07-13 00:29:04.804641] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:17.807 [2024-07-13 00:29:04.804653] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.804658] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.804661] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5bd70) 00:20:17.807 [2024-07-13 00:29:04.804669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.807 [2024-07-13 00:29:04.804692] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba55f0, cid 0, qid 0 00:20:17.807 [2024-07-13 00:29:04.804758] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.807 [2024-07-13 00:29:04.804765] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.807 [2024-07-13 00:29:04.804769] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.804773] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba55f0) on tqpair=0xb5bd70 00:20:17.807 [2024-07-13 00:29:04.804779] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:17.807 [2024-07-13 00:29:04.804784] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:17.807 [2024-07-13 00:29:04.804792] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:17.807 [2024-07-13 00:29:04.804810] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:17.807 [2024-07-13 00:29:04.804821] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.804825] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.804829] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5bd70) 00:20:17.807 [2024-07-13 00:29:04.804837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.807 [2024-07-13 00:29:04.804859] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba55f0, cid 0, qid 0 00:20:17.807 [2024-07-13 00:29:04.804972] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.807 [2024-07-13 00:29:04.804980] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.807 [2024-07-13 00:29:04.804984] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.804988] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb5bd70): datao=0, datal=4096, cccid=0 00:20:17.807 [2024-07-13 00:29:04.804993] 
nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xba55f0) on tqpair(0xb5bd70): expected_datao=0, payload_size=4096 00:20:17.807 [2024-07-13 00:29:04.805002] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.805007] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.805016] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.807 [2024-07-13 00:29:04.805023] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.807 [2024-07-13 00:29:04.805026] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.805030] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba55f0) on tqpair=0xb5bd70 00:20:17.807 [2024-07-13 00:29:04.805039] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:17.807 [2024-07-13 00:29:04.805045] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:17.807 [2024-07-13 00:29:04.805049] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:17.807 [2024-07-13 00:29:04.805054] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:17.807 [2024-07-13 00:29:04.805059] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:17.807 [2024-07-13 00:29:04.805064] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:17.807 [2024-07-13 00:29:04.805079] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:17.807 [2024-07-13 00:29:04.805097] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.805102] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.805106] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5bd70) 00:20:17.807 [2024-07-13 00:29:04.805114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:17.807 [2024-07-13 00:29:04.805136] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba55f0, cid 0, qid 0 00:20:17.807 [2024-07-13 00:29:04.805203] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.807 [2024-07-13 00:29:04.805210] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.807 [2024-07-13 00:29:04.805214] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.805218] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba55f0) on tqpair=0xb5bd70 00:20:17.807 [2024-07-13 00:29:04.805226] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.805231] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.805235] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb5bd70) 00:20:17.807 [2024-07-13 00:29:04.805241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:20:17.807 [2024-07-13 00:29:04.805248] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.805252] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.805256] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb5bd70) 00:20:17.807 [2024-07-13 00:29:04.805262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.807 [2024-07-13 00:29:04.805268] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.805272] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.805276] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb5bd70) 00:20:17.807 [2024-07-13 00:29:04.805282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.807 [2024-07-13 00:29:04.805288] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.805292] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.807 [2024-07-13 00:29:04.805296] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.807 [2024-07-13 00:29:04.805301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.807 [2024-07-13 00:29:04.805307] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:17.808 [2024-07-13 00:29:04.805320] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:17.808 [2024-07-13 00:29:04.805328] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.805332] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.805336] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb5bd70) 00:20:17.808 [2024-07-13 00:29:04.805343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.808 [2024-07-13 00:29:04.805366] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba55f0, cid 0, qid 0 00:20:17.808 [2024-07-13 00:29:04.805374] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5750, cid 1, qid 0 00:20:17.808 [2024-07-13 00:29:04.805379] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba58b0, cid 2, qid 0 00:20:17.808 [2024-07-13 00:29:04.805384] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.808 [2024-07-13 00:29:04.805389] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5b70, cid 4, qid 0 00:20:17.808 [2024-07-13 00:29:04.805479] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.808 [2024-07-13 00:29:04.805486] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.808 [2024-07-13 00:29:04.805490] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.805494] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5b70) on tqpair=0xb5bd70 00:20:17.808 [2024-07-13 00:29:04.805500] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:17.808 [2024-07-13 00:29:04.805506] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:17.808 [2024-07-13 00:29:04.805515] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:17.808 [2024-07-13 00:29:04.805526] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:17.808 [2024-07-13 00:29:04.805534] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.805538] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.805542] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb5bd70) 00:20:17.808 [2024-07-13 00:29:04.805549] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:17.808 [2024-07-13 00:29:04.805570] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5b70, cid 4, qid 0 00:20:17.808 [2024-07-13 00:29:04.805633] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.808 [2024-07-13 00:29:04.805642] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.808 [2024-07-13 00:29:04.805646] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.805650] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5b70) on tqpair=0xb5bd70 00:20:17.808 [2024-07-13 00:29:04.805714] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:17.808 [2024-07-13 00:29:04.805740] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:17.808 [2024-07-13 00:29:04.805749] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.805754] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.805758] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb5bd70) 00:20:17.808 [2024-07-13 00:29:04.805766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.808 [2024-07-13 00:29:04.805790] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5b70, cid 4, qid 0 00:20:17.808 [2024-07-13 00:29:04.805868] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.808 [2024-07-13 00:29:04.805875] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.808 [2024-07-13 00:29:04.805879] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.805884] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb5bd70): datao=0, datal=4096, cccid=4 00:20:17.808 [2024-07-13 00:29:04.805889] 
nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xba5b70) on tqpair(0xb5bd70): expected_datao=0, payload_size=4096 00:20:17.808 [2024-07-13 00:29:04.805897] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.805902] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.805911] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.808 [2024-07-13 00:29:04.805917] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.808 [2024-07-13 00:29:04.805921] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.805925] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5b70) on tqpair=0xb5bd70 00:20:17.808 [2024-07-13 00:29:04.805943] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:17.808 [2024-07-13 00:29:04.805955] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:17.808 [2024-07-13 00:29:04.805966] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:17.808 [2024-07-13 00:29:04.805974] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.805978] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.805982] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb5bd70) 00:20:17.808 [2024-07-13 00:29:04.805989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.808 [2024-07-13 00:29:04.806012] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5b70, cid 4, qid 0 00:20:17.808 [2024-07-13 00:29:04.806093] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.808 [2024-07-13 00:29:04.806100] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.808 [2024-07-13 00:29:04.806104] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.806108] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb5bd70): datao=0, datal=4096, cccid=4 00:20:17.808 [2024-07-13 00:29:04.806113] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xba5b70) on tqpair(0xb5bd70): expected_datao=0, payload_size=4096 00:20:17.808 [2024-07-13 00:29:04.806122] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.806126] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.806135] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.808 [2024-07-13 00:29:04.806141] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.808 [2024-07-13 00:29:04.806145] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.806149] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5b70) on tqpair=0xb5bd70 00:20:17.808 [2024-07-13 00:29:04.806167] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:17.808 [2024-07-13 00:29:04.806179] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:17.808 [2024-07-13 00:29:04.806187] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.806192] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.806196] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb5bd70) 00:20:17.808 [2024-07-13 00:29:04.806203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.808 [2024-07-13 00:29:04.806224] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5b70, cid 4, qid 0 00:20:17.808 [2024-07-13 00:29:04.806303] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.808 [2024-07-13 00:29:04.806310] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.808 [2024-07-13 00:29:04.806314] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.806317] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb5bd70): datao=0, datal=4096, cccid=4 00:20:17.808 [2024-07-13 00:29:04.806323] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xba5b70) on tqpair(0xb5bd70): expected_datao=0, payload_size=4096 00:20:17.808 [2024-07-13 00:29:04.806331] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.806335] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.806344] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.808 [2024-07-13 00:29:04.806351] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.808 [2024-07-13 00:29:04.806354] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.806359] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5b70) on tqpair=0xb5bd70 00:20:17.808 [2024-07-13 00:29:04.806368] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:17.808 [2024-07-13 00:29:04.806377] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:17.808 [2024-07-13 00:29:04.806389] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:17.808 [2024-07-13 00:29:04.806396] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:17.808 [2024-07-13 00:29:04.806402] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:17.808 [2024-07-13 00:29:04.806408] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:17.808 [2024-07-13 00:29:04.806413] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:17.808 [2024-07-13 00:29:04.806419] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
ready (no timeout) 00:20:17.808 [2024-07-13 00:29:04.806448] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.806455] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.806459] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb5bd70) 00:20:17.808 [2024-07-13 00:29:04.806466] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.808 [2024-07-13 00:29:04.806474] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.808 [2024-07-13 00:29:04.806478] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.806482] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb5bd70) 00:20:17.809 [2024-07-13 00:29:04.806488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.809 [2024-07-13 00:29:04.806517] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5b70, cid 4, qid 0 00:20:17.809 [2024-07-13 00:29:04.806525] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5cd0, cid 5, qid 0 00:20:17.809 [2024-07-13 00:29:04.806598] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.809 [2024-07-13 00:29:04.806605] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.809 [2024-07-13 00:29:04.806609] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.806628] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5b70) on tqpair=0xb5bd70 00:20:17.809 [2024-07-13 00:29:04.806637] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.809 [2024-07-13 00:29:04.806644] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.809 [2024-07-13 00:29:04.806647] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.806651] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5cd0) on tqpair=0xb5bd70 00:20:17.809 [2024-07-13 00:29:04.806664] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.806669] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.806673] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb5bd70) 00:20:17.809 [2024-07-13 00:29:04.806680] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.809 [2024-07-13 00:29:04.806705] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5cd0, cid 5, qid 0 00:20:17.809 [2024-07-13 00:29:04.806772] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.809 [2024-07-13 00:29:04.806779] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.809 [2024-07-13 00:29:04.806783] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.806787] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5cd0) on tqpair=0xb5bd70 00:20:17.809 [2024-07-13 00:29:04.806799] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.806803] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.806807] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb5bd70) 00:20:17.809 [2024-07-13 00:29:04.806814] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.809 [2024-07-13 00:29:04.806835] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5cd0, cid 5, qid 0 00:20:17.809 [2024-07-13 00:29:04.806936] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.809 [2024-07-13 00:29:04.806947] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.809 [2024-07-13 00:29:04.806951] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.806955] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5cd0) on tqpair=0xb5bd70 00:20:17.809 [2024-07-13 00:29:04.806967] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.806972] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.806976] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb5bd70) 00:20:17.809 [2024-07-13 00:29:04.806984] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.809 [2024-07-13 00:29:04.807009] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5cd0, cid 5, qid 0 00:20:17.809 [2024-07-13 00:29:04.807070] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.809 [2024-07-13 00:29:04.807084] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.809 [2024-07-13 00:29:04.807088] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807093] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5cd0) on tqpair=0xb5bd70 00:20:17.809 [2024-07-13 00:29:04.807108] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807113] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807117] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb5bd70) 00:20:17.809 [2024-07-13 00:29:04.807125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.809 [2024-07-13 00:29:04.807133] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807137] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807141] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb5bd70) 00:20:17.809 [2024-07-13 00:29:04.807147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.809 [2024-07-13 00:29:04.807155] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807159] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807163] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=6 on tqpair(0xb5bd70) 00:20:17.809 [2024-07-13 00:29:04.807169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.809 [2024-07-13 00:29:04.807177] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807181] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807185] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb5bd70) 00:20:17.809 [2024-07-13 00:29:04.807191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.809 [2024-07-13 00:29:04.807215] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5cd0, cid 5, qid 0 00:20:17.809 [2024-07-13 00:29:04.807223] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5b70, cid 4, qid 0 00:20:17.809 [2024-07-13 00:29:04.807228] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5e30, cid 6, qid 0 00:20:17.809 [2024-07-13 00:29:04.807233] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5f90, cid 7, qid 0 00:20:17.809 [2024-07-13 00:29:04.807373] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.809 [2024-07-13 00:29:04.807385] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.809 [2024-07-13 00:29:04.807389] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807393] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb5bd70): datao=0, datal=8192, cccid=5 00:20:17.809 [2024-07-13 00:29:04.807399] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xba5cd0) on tqpair(0xb5bd70): expected_datao=0, payload_size=8192 00:20:17.809 [2024-07-13 00:29:04.807418] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807424] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807430] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.809 [2024-07-13 00:29:04.807436] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.809 [2024-07-13 00:29:04.807440] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807444] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb5bd70): datao=0, datal=512, cccid=4 00:20:17.809 [2024-07-13 00:29:04.807448] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xba5b70) on tqpair(0xb5bd70): expected_datao=0, payload_size=512 00:20:17.809 [2024-07-13 00:29:04.807456] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807459] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807465] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.809 [2024-07-13 00:29:04.807471] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.809 [2024-07-13 00:29:04.807475] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807479] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb5bd70): datao=0, datal=512, cccid=6 00:20:17.809 
[2024-07-13 00:29:04.807483] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xba5e30) on tqpair(0xb5bd70): expected_datao=0, payload_size=512 00:20:17.809 [2024-07-13 00:29:04.807491] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807494] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807500] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.809 [2024-07-13 00:29:04.807506] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.809 [2024-07-13 00:29:04.807510] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807513] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb5bd70): datao=0, datal=4096, cccid=7 00:20:17.809 [2024-07-13 00:29:04.807518] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xba5f90) on tqpair(0xb5bd70): expected_datao=0, payload_size=4096 00:20:17.809 [2024-07-13 00:29:04.807526] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807531] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807536] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.809 [2024-07-13 00:29:04.807542] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.809 [2024-07-13 00:29:04.807546] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807550] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5cd0) on tqpair=0xb5bd70 00:20:17.809 [2024-07-13 00:29:04.807571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.809 [2024-07-13 00:29:04.807579] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.809 [2024-07-13 00:29:04.807583] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.807587] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5b70) on tqpair=0xb5bd70 00:20:17.809 [2024-07-13 00:29:04.807598] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.809 [2024-07-13 00:29:04.807604] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.809 [2024-07-13 00:29:04.807608] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.809 [2024-07-13 00:29:04.811639] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5e30) on tqpair=0xb5bd70 00:20:17.809 [2024-07-13 00:29:04.811664] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.809 [2024-07-13 00:29:04.811672] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.809 [2024-07-13 00:29:04.811676] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.810 [2024-07-13 00:29:04.811679] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5f90) on tqpair=0xb5bd70 00:20:17.810 ===================================================== 00:20:17.810 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:17.810 ===================================================== 00:20:17.810 Controller Capabilities/Features 00:20:17.810 ================================ 00:20:17.810 Vendor ID: 8086 00:20:17.810 Subsystem Vendor ID: 8086 00:20:17.810 Serial Number: SPDK00000000000001 00:20:17.810 Model Number: SPDK bdev Controller 
00:20:17.810 Firmware Version: 24.01.1 00:20:17.810 Recommended Arb Burst: 6 00:20:17.810 IEEE OUI Identifier: e4 d2 5c 00:20:17.810 Multi-path I/O 00:20:17.810 May have multiple subsystem ports: Yes 00:20:17.810 May have multiple controllers: Yes 00:20:17.810 Associated with SR-IOV VF: No 00:20:17.810 Max Data Transfer Size: 131072 00:20:17.810 Max Number of Namespaces: 32 00:20:17.810 Max Number of I/O Queues: 127 00:20:17.810 NVMe Specification Version (VS): 1.3 00:20:17.810 NVMe Specification Version (Identify): 1.3 00:20:17.810 Maximum Queue Entries: 128 00:20:17.810 Contiguous Queues Required: Yes 00:20:17.810 Arbitration Mechanisms Supported 00:20:17.810 Weighted Round Robin: Not Supported 00:20:17.810 Vendor Specific: Not Supported 00:20:17.810 Reset Timeout: 15000 ms 00:20:17.810 Doorbell Stride: 4 bytes 00:20:17.810 NVM Subsystem Reset: Not Supported 00:20:17.810 Command Sets Supported 00:20:17.810 NVM Command Set: Supported 00:20:17.810 Boot Partition: Not Supported 00:20:17.810 Memory Page Size Minimum: 4096 bytes 00:20:17.810 Memory Page Size Maximum: 4096 bytes 00:20:17.810 Persistent Memory Region: Not Supported 00:20:17.810 Optional Asynchronous Events Supported 00:20:17.810 Namespace Attribute Notices: Supported 00:20:17.810 Firmware Activation Notices: Not Supported 00:20:17.810 ANA Change Notices: Not Supported 00:20:17.810 PLE Aggregate Log Change Notices: Not Supported 00:20:17.810 LBA Status Info Alert Notices: Not Supported 00:20:17.810 EGE Aggregate Log Change Notices: Not Supported 00:20:17.810 Normal NVM Subsystem Shutdown event: Not Supported 00:20:17.810 Zone Descriptor Change Notices: Not Supported 00:20:17.810 Discovery Log Change Notices: Not Supported 00:20:17.810 Controller Attributes 00:20:17.810 128-bit Host Identifier: Supported 00:20:17.810 Non-Operational Permissive Mode: Not Supported 00:20:17.810 NVM Sets: Not Supported 00:20:17.810 Read Recovery Levels: Not Supported 00:20:17.810 Endurance Groups: Not Supported 00:20:17.810 Predictable Latency Mode: Not Supported 00:20:17.810 Traffic Based Keep ALive: Not Supported 00:20:17.810 Namespace Granularity: Not Supported 00:20:17.810 SQ Associations: Not Supported 00:20:17.810 UUID List: Not Supported 00:20:17.810 Multi-Domain Subsystem: Not Supported 00:20:17.810 Fixed Capacity Management: Not Supported 00:20:17.810 Variable Capacity Management: Not Supported 00:20:17.810 Delete Endurance Group: Not Supported 00:20:17.810 Delete NVM Set: Not Supported 00:20:17.810 Extended LBA Formats Supported: Not Supported 00:20:17.810 Flexible Data Placement Supported: Not Supported 00:20:17.810 00:20:17.810 Controller Memory Buffer Support 00:20:17.810 ================================ 00:20:17.810 Supported: No 00:20:17.810 00:20:17.810 Persistent Memory Region Support 00:20:17.810 ================================ 00:20:17.810 Supported: No 00:20:17.810 00:20:17.810 Admin Command Set Attributes 00:20:17.810 ============================ 00:20:17.810 Security Send/Receive: Not Supported 00:20:17.810 Format NVM: Not Supported 00:20:17.810 Firmware Activate/Download: Not Supported 00:20:17.810 Namespace Management: Not Supported 00:20:17.810 Device Self-Test: Not Supported 00:20:17.810 Directives: Not Supported 00:20:17.810 NVMe-MI: Not Supported 00:20:17.810 Virtualization Management: Not Supported 00:20:17.810 Doorbell Buffer Config: Not Supported 00:20:17.810 Get LBA Status Capability: Not Supported 00:20:17.810 Command & Feature Lockdown Capability: Not Supported 00:20:17.810 Abort Command Limit: 4 00:20:17.810 Async 
Event Request Limit: 4 00:20:17.810 Number of Firmware Slots: N/A 00:20:17.810 Firmware Slot 1 Read-Only: N/A 00:20:17.810 Firmware Activation Without Reset: N/A 00:20:17.810 Multiple Update Detection Support: N/A 00:20:17.810 Firmware Update Granularity: No Information Provided 00:20:17.810 Per-Namespace SMART Log: No 00:20:17.810 Asymmetric Namespace Access Log Page: Not Supported 00:20:17.810 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:17.810 Command Effects Log Page: Supported 00:20:17.810 Get Log Page Extended Data: Supported 00:20:17.810 Telemetry Log Pages: Not Supported 00:20:17.810 Persistent Event Log Pages: Not Supported 00:20:17.810 Supported Log Pages Log Page: May Support 00:20:17.810 Commands Supported & Effects Log Page: Not Supported 00:20:17.810 Feature Identifiers & Effects Log Page:May Support 00:20:17.810 NVMe-MI Commands & Effects Log Page: May Support 00:20:17.810 Data Area 4 for Telemetry Log: Not Supported 00:20:17.810 Error Log Page Entries Supported: 128 00:20:17.810 Keep Alive: Supported 00:20:17.810 Keep Alive Granularity: 10000 ms 00:20:17.810 00:20:17.810 NVM Command Set Attributes 00:20:17.810 ========================== 00:20:17.810 Submission Queue Entry Size 00:20:17.810 Max: 64 00:20:17.810 Min: 64 00:20:17.810 Completion Queue Entry Size 00:20:17.810 Max: 16 00:20:17.810 Min: 16 00:20:17.810 Number of Namespaces: 32 00:20:17.810 Compare Command: Supported 00:20:17.810 Write Uncorrectable Command: Not Supported 00:20:17.810 Dataset Management Command: Supported 00:20:17.810 Write Zeroes Command: Supported 00:20:17.810 Set Features Save Field: Not Supported 00:20:17.810 Reservations: Supported 00:20:17.810 Timestamp: Not Supported 00:20:17.810 Copy: Supported 00:20:17.810 Volatile Write Cache: Present 00:20:17.810 Atomic Write Unit (Normal): 1 00:20:17.810 Atomic Write Unit (PFail): 1 00:20:17.810 Atomic Compare & Write Unit: 1 00:20:17.810 Fused Compare & Write: Supported 00:20:17.810 Scatter-Gather List 00:20:17.810 SGL Command Set: Supported 00:20:17.810 SGL Keyed: Supported 00:20:17.810 SGL Bit Bucket Descriptor: Not Supported 00:20:17.810 SGL Metadata Pointer: Not Supported 00:20:17.810 Oversized SGL: Not Supported 00:20:17.810 SGL Metadata Address: Not Supported 00:20:17.810 SGL Offset: Supported 00:20:17.810 Transport SGL Data Block: Not Supported 00:20:17.810 Replay Protected Memory Block: Not Supported 00:20:17.810 00:20:17.810 Firmware Slot Information 00:20:17.810 ========================= 00:20:17.810 Active slot: 1 00:20:17.810 Slot 1 Firmware Revision: 24.01.1 00:20:17.810 00:20:17.810 00:20:17.810 Commands Supported and Effects 00:20:17.810 ============================== 00:20:17.810 Admin Commands 00:20:17.810 -------------- 00:20:17.810 Get Log Page (02h): Supported 00:20:17.810 Identify (06h): Supported 00:20:17.810 Abort (08h): Supported 00:20:17.810 Set Features (09h): Supported 00:20:17.810 Get Features (0Ah): Supported 00:20:17.810 Asynchronous Event Request (0Ch): Supported 00:20:17.810 Keep Alive (18h): Supported 00:20:17.810 I/O Commands 00:20:17.811 ------------ 00:20:17.811 Flush (00h): Supported LBA-Change 00:20:17.811 Write (01h): Supported LBA-Change 00:20:17.811 Read (02h): Supported 00:20:17.811 Compare (05h): Supported 00:20:17.811 Write Zeroes (08h): Supported LBA-Change 00:20:17.811 Dataset Management (09h): Supported LBA-Change 00:20:17.811 Copy (19h): Supported LBA-Change 00:20:17.811 Unknown (79h): Supported LBA-Change 00:20:17.811 Unknown (7Ah): Supported 00:20:17.811 00:20:17.811 Error Log 00:20:17.811 ========= 
00:20:17.811 00:20:17.811 Arbitration 00:20:17.811 =========== 00:20:17.811 Arbitration Burst: 1 00:20:17.811 00:20:17.811 Power Management 00:20:17.811 ================ 00:20:17.811 Number of Power States: 1 00:20:17.811 Current Power State: Power State #0 00:20:17.811 Power State #0: 00:20:17.811 Max Power: 0.00 W 00:20:17.811 Non-Operational State: Operational 00:20:17.811 Entry Latency: Not Reported 00:20:17.811 Exit Latency: Not Reported 00:20:17.811 Relative Read Throughput: 0 00:20:17.811 Relative Read Latency: 0 00:20:17.811 Relative Write Throughput: 0 00:20:17.811 Relative Write Latency: 0 00:20:17.811 Idle Power: Not Reported 00:20:17.811 Active Power: Not Reported 00:20:17.811 Non-Operational Permissive Mode: Not Supported 00:20:17.811 00:20:17.811 Health Information 00:20:17.811 ================== 00:20:17.811 Critical Warnings: 00:20:17.811 Available Spare Space: OK 00:20:17.811 Temperature: OK 00:20:17.811 Device Reliability: OK 00:20:17.811 Read Only: No 00:20:17.811 Volatile Memory Backup: OK 00:20:17.811 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:17.811 Temperature Threshold: [2024-07-13 00:29:04.811802] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.811 [2024-07-13 00:29:04.811810] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.811 [2024-07-13 00:29:04.811814] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb5bd70) 00:20:17.811 [2024-07-13 00:29:04.811822] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.811 [2024-07-13 00:29:04.811852] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5f90, cid 7, qid 0 00:20:17.811 [2024-07-13 00:29:04.811932] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.811 [2024-07-13 00:29:04.811939] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.811 [2024-07-13 00:29:04.811943] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.811 [2024-07-13 00:29:04.811947] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5f90) on tqpair=0xb5bd70 00:20:17.811 [2024-07-13 00:29:04.811986] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:17.811 [2024-07-13 00:29:04.812001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.811 [2024-07-13 00:29:04.812009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.811 [2024-07-13 00:29:04.812015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.811 [2024-07-13 00:29:04.812022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.811 [2024-07-13 00:29:04.812032] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.811 [2024-07-13 00:29:04.812036] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.811 [2024-07-13 00:29:04.812040] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.811 [2024-07-13 00:29:04.812049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.811 [2024-07-13 00:29:04.812075] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.811 [2024-07-13 00:29:04.812134] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.811 [2024-07-13 00:29:04.812141] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.811 [2024-07-13 00:29:04.812145] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.811 [2024-07-13 00:29:04.812149] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.811 [2024-07-13 00:29:04.812157] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.811 [2024-07-13 00:29:04.812162] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.811 [2024-07-13 00:29:04.812166] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.811 [2024-07-13 00:29:04.812174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.811 [2024-07-13 00:29:04.812199] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.811 [2024-07-13 00:29:04.812283] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.811 [2024-07-13 00:29:04.812290] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.811 [2024-07-13 00:29:04.812294] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.811 [2024-07-13 00:29:04.812298] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.811 [2024-07-13 00:29:04.812304] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:17.811 [2024-07-13 00:29:04.812309] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:17.811 [2024-07-13 00:29:04.812319] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.811 [2024-07-13 00:29:04.812324] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.811 [2024-07-13 00:29:04.812328] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.811 [2024-07-13 00:29:04.812336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.811 [2024-07-13 00:29:04.812357] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.811 [2024-07-13 00:29:04.812423] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.811 [2024-07-13 00:29:04.812430] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.811 [2024-07-13 00:29:04.812434] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.811 [2024-07-13 00:29:04.812438] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.811 [2024-07-13 00:29:04.812449] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.811 [2024-07-13 00:29:04.812454] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.811 [2024-07-13 00:29:04.812458] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.811 [2024-07-13 
00:29:04.812466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.811 [2024-07-13 00:29:04.812486] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.811 [2024-07-13 00:29:04.812547] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.811 [2024-07-13 00:29:04.812554] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.811 [2024-07-13 00:29:04.812558] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.811 [2024-07-13 00:29:04.812562] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.811 [2024-07-13 00:29:04.812573] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.811 [2024-07-13 00:29:04.812578] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.811 [2024-07-13 00:29:04.812582] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.811 [2024-07-13 00:29:04.812589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.812 [2024-07-13 00:29:04.812646] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.812 [2024-07-13 00:29:04.812702] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.812 [2024-07-13 00:29:04.812710] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.812 [2024-07-13 00:29:04.812714] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.812718] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.812 [2024-07-13 00:29:04.812729] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.812734] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.812738] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.812 [2024-07-13 00:29:04.812746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.812 [2024-07-13 00:29:04.812767] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.812 [2024-07-13 00:29:04.812829] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.812 [2024-07-13 00:29:04.812836] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.812 [2024-07-13 00:29:04.812840] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.812844] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.812 [2024-07-13 00:29:04.812856] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.812860] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.812864] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.812 [2024-07-13 00:29:04.812871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.812 [2024-07-13 00:29:04.812891] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.812 [2024-07-13 00:29:04.812955] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.812 [2024-07-13 00:29:04.812964] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.812 [2024-07-13 00:29:04.812968] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.812972] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.812 [2024-07-13 00:29:04.812983] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.812988] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.812992] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.812 [2024-07-13 00:29:04.812999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.812 [2024-07-13 00:29:04.813020] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.812 [2024-07-13 00:29:04.813097] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.812 [2024-07-13 00:29:04.813104] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.812 [2024-07-13 00:29:04.813108] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813113] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.812 [2024-07-13 00:29:04.813124] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813129] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813133] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.812 [2024-07-13 00:29:04.813140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.812 [2024-07-13 00:29:04.813160] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.812 [2024-07-13 00:29:04.813216] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.812 [2024-07-13 00:29:04.813232] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.812 [2024-07-13 00:29:04.813237] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813242] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.812 [2024-07-13 00:29:04.813254] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813259] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813263] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.812 [2024-07-13 00:29:04.813271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.812 [2024-07-13 00:29:04.813293] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.812 [2024-07-13 00:29:04.813347] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.812 [2024-07-13 
00:29:04.813354] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.812 [2024-07-13 00:29:04.813358] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813362] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.812 [2024-07-13 00:29:04.813373] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813378] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813382] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.812 [2024-07-13 00:29:04.813389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.812 [2024-07-13 00:29:04.813410] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.812 [2024-07-13 00:29:04.813464] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.812 [2024-07-13 00:29:04.813476] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.812 [2024-07-13 00:29:04.813480] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813485] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.812 [2024-07-13 00:29:04.813496] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813501] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813505] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.812 [2024-07-13 00:29:04.813513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.812 [2024-07-13 00:29:04.813534] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.812 [2024-07-13 00:29:04.813595] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.812 [2024-07-13 00:29:04.813602] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.812 [2024-07-13 00:29:04.813606] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813610] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.812 [2024-07-13 00:29:04.813634] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813640] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813644] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.812 [2024-07-13 00:29:04.813652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.812 [2024-07-13 00:29:04.813674] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.812 [2024-07-13 00:29:04.813731] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.812 [2024-07-13 00:29:04.813738] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.812 [2024-07-13 00:29:04.813742] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.812 
[2024-07-13 00:29:04.813746] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.812 [2024-07-13 00:29:04.813758] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813766] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.812 [2024-07-13 00:29:04.813774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.812 [2024-07-13 00:29:04.813794] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.812 [2024-07-13 00:29:04.813851] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.812 [2024-07-13 00:29:04.813858] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.812 [2024-07-13 00:29:04.813861] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813866] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.812 [2024-07-13 00:29:04.813877] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813882] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813886] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.812 [2024-07-13 00:29:04.813893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.812 [2024-07-13 00:29:04.813914] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.812 [2024-07-13 00:29:04.813967] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.812 [2024-07-13 00:29:04.813974] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.812 [2024-07-13 00:29:04.813977] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813982] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.812 [2024-07-13 00:29:04.813993] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.813998] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.814001] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.812 [2024-07-13 00:29:04.814009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.812 [2024-07-13 00:29:04.814029] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.812 [2024-07-13 00:29:04.814093] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.812 [2024-07-13 00:29:04.814105] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.812 [2024-07-13 00:29:04.814110] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.814114] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.812 [2024-07-13 00:29:04.814126] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.814131] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.814135] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.812 [2024-07-13 00:29:04.814142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.812 [2024-07-13 00:29:04.814163] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.812 [2024-07-13 00:29:04.814215] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.812 [2024-07-13 00:29:04.814236] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.812 [2024-07-13 00:29:04.814241] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.812 [2024-07-13 00:29:04.814245] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.812 [2024-07-13 00:29:04.814257] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814262] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814266] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.813 [2024-07-13 00:29:04.814274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.813 [2024-07-13 00:29:04.814295] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.813 [2024-07-13 00:29:04.814348] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.813 [2024-07-13 00:29:04.814360] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.813 [2024-07-13 00:29:04.814364] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814368] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.813 [2024-07-13 00:29:04.814380] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814385] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814389] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.813 [2024-07-13 00:29:04.814397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.813 [2024-07-13 00:29:04.814418] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.813 [2024-07-13 00:29:04.814478] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.813 [2024-07-13 00:29:04.814485] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.813 [2024-07-13 00:29:04.814489] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814493] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.813 [2024-07-13 00:29:04.814504] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814509] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814513] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.813 [2024-07-13 00:29:04.814520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.813 [2024-07-13 00:29:04.814541] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.813 [2024-07-13 00:29:04.814596] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.813 [2024-07-13 00:29:04.814603] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.813 [2024-07-13 00:29:04.814607] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814623] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.813 [2024-07-13 00:29:04.814636] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814641] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814646] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.813 [2024-07-13 00:29:04.814653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.813 [2024-07-13 00:29:04.814675] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.813 [2024-07-13 00:29:04.814745] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.813 [2024-07-13 00:29:04.814752] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.813 [2024-07-13 00:29:04.814756] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814760] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.813 [2024-07-13 00:29:04.814771] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814776] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814780] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.813 [2024-07-13 00:29:04.814787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.813 [2024-07-13 00:29:04.814808] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.813 [2024-07-13 00:29:04.814862] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.813 [2024-07-13 00:29:04.814869] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.813 [2024-07-13 00:29:04.814873] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814877] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.813 [2024-07-13 00:29:04.814888] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814893] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814897] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.813 [2024-07-13 00:29:04.814904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.813 [2024-07-13 00:29:04.814925] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.813 [2024-07-13 00:29:04.814977] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.813 [2024-07-13 00:29:04.814984] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.813 [2024-07-13 00:29:04.814988] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.814992] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.813 [2024-07-13 00:29:04.815003] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.815008] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.815012] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.813 [2024-07-13 00:29:04.815019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.813 [2024-07-13 00:29:04.815040] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.813 [2024-07-13 00:29:04.815100] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.813 [2024-07-13 00:29:04.815107] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.813 [2024-07-13 00:29:04.815110] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.815115] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.813 [2024-07-13 00:29:04.815126] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.815130] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.815134] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.813 [2024-07-13 00:29:04.815142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.813 [2024-07-13 00:29:04.815161] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.813 [2024-07-13 00:29:04.815218] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.813 [2024-07-13 00:29:04.815225] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.813 [2024-07-13 00:29:04.815228] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.815232] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.813 [2024-07-13 00:29:04.815244] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.815248] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.815252] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.813 [2024-07-13 00:29:04.815260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.813 [2024-07-13 00:29:04.815280] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 
00:20:17.813 [2024-07-13 00:29:04.815345] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.813 [2024-07-13 00:29:04.815352] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.813 [2024-07-13 00:29:04.815356] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.815360] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.813 [2024-07-13 00:29:04.815371] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.815376] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.815380] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.813 [2024-07-13 00:29:04.815387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.813 [2024-07-13 00:29:04.815408] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.813 [2024-07-13 00:29:04.815465] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.813 [2024-07-13 00:29:04.815473] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.813 [2024-07-13 00:29:04.815477] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.815481] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.813 [2024-07-13 00:29:04.815493] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.815498] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.815501] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.813 [2024-07-13 00:29:04.815509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.813 [2024-07-13 00:29:04.815530] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.813 [2024-07-13 00:29:04.815581] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.813 [2024-07-13 00:29:04.815593] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.813 [2024-07-13 00:29:04.815597] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.815602] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.813 [2024-07-13 00:29:04.819626] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.819646] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.813 [2024-07-13 00:29:04.819651] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb5bd70) 00:20:17.814 [2024-07-13 00:29:04.819676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.814 [2024-07-13 00:29:04.819706] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xba5a10, cid 3, qid 0 00:20:17.814 [2024-07-13 00:29:04.819771] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.814 [2024-07-13 00:29:04.819779] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:17.814 [2024-07-13 00:29:04.819783] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.814 [2024-07-13 00:29:04.819787] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xba5a10) on tqpair=0xb5bd70 00:20:17.814 [2024-07-13 00:29:04.819796] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:20:17.814 0 Kelvin (-273 Celsius) 00:20:17.814 Available Spare: 0% 00:20:17.814 Available Spare Threshold: 0% 00:20:17.814 Life Percentage Used: 0% 00:20:17.814 Data Units Read: 0 00:20:17.814 Data Units Written: 0 00:20:17.814 Host Read Commands: 0 00:20:17.814 Host Write Commands: 0 00:20:17.814 Controller Busy Time: 0 minutes 00:20:17.814 Power Cycles: 0 00:20:17.814 Power On Hours: 0 hours 00:20:17.814 Unsafe Shutdowns: 0 00:20:17.814 Unrecoverable Media Errors: 0 00:20:17.814 Lifetime Error Log Entries: 0 00:20:17.814 Warning Temperature Time: 0 minutes 00:20:17.814 Critical Temperature Time: 0 minutes 00:20:17.814 00:20:17.814 Number of Queues 00:20:17.814 ================ 00:20:17.814 Number of I/O Submission Queues: 127 00:20:17.814 Number of I/O Completion Queues: 127 00:20:17.814 00:20:17.814 Active Namespaces 00:20:17.814 ================= 00:20:17.814 Namespace ID:1 00:20:17.814 Error Recovery Timeout: Unlimited 00:20:17.814 Command Set Identifier: NVM (00h) 00:20:17.814 Deallocate: Supported 00:20:17.814 Deallocated/Unwritten Error: Not Supported 00:20:17.814 Deallocated Read Value: Unknown 00:20:17.814 Deallocate in Write Zeroes: Not Supported 00:20:17.814 Deallocated Guard Field: 0xFFFF 00:20:17.814 Flush: Supported 00:20:17.814 Reservation: Supported 00:20:17.814 Namespace Sharing Capabilities: Multiple Controllers 00:20:17.814 Size (in LBAs): 131072 (0GiB) 00:20:17.814 Capacity (in LBAs): 131072 (0GiB) 00:20:17.814 Utilization (in LBAs): 131072 (0GiB) 00:20:17.814 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:17.814 EUI64: ABCDEF0123456789 00:20:17.814 UUID: 4d6c8687-e073-4c55-9020-27a18ad8c9dd 00:20:17.814 Thin Provisioning: Not Supported 00:20:17.814 Per-NS Atomic Units: Yes 00:20:17.814 Atomic Boundary Size (Normal): 0 00:20:17.814 Atomic Boundary Size (PFail): 0 00:20:17.814 Atomic Boundary Offset: 0 00:20:17.814 Maximum Single Source Range Length: 65535 00:20:17.814 Maximum Copy Length: 65535 00:20:17.814 Maximum Source Range Count: 1 00:20:17.814 NGUID/EUI64 Never Reused: No 00:20:17.814 Namespace Write Protected: No 00:20:17.814 Number of LBA Formats: 1 00:20:17.814 Current LBA Format: LBA Format #00 00:20:17.814 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:17.814 00:20:17.814 00:29:04 -- host/identify.sh@51 -- # sync 00:20:17.814 00:29:04 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:17.814 00:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.814 00:29:04 -- common/autotest_common.sh@10 -- # set +x 00:20:17.814 00:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.814 00:29:04 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:17.814 00:29:04 -- host/identify.sh@56 -- # nvmftestfini 00:20:17.814 00:29:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:17.814 00:29:04 -- nvmf/common.sh@116 -- # sync 00:20:17.814 00:29:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:17.814 00:29:04 -- nvmf/common.sh@119 -- # set +e 00:20:17.814 00:29:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:17.814 00:29:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 
00:20:17.814 rmmod nvme_tcp 00:20:17.814 rmmod nvme_fabrics 00:20:17.814 rmmod nvme_keyring 00:20:17.814 00:29:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:17.814 00:29:04 -- nvmf/common.sh@123 -- # set -e 00:20:17.814 00:29:04 -- nvmf/common.sh@124 -- # return 0 00:20:17.814 00:29:04 -- nvmf/common.sh@477 -- # '[' -n 92968 ']' 00:20:17.814 00:29:04 -- nvmf/common.sh@478 -- # killprocess 92968 00:20:17.814 00:29:04 -- common/autotest_common.sh@926 -- # '[' -z 92968 ']' 00:20:17.814 00:29:04 -- common/autotest_common.sh@930 -- # kill -0 92968 00:20:17.814 00:29:04 -- common/autotest_common.sh@931 -- # uname 00:20:17.814 00:29:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:17.814 00:29:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92968 00:20:17.814 killing process with pid 92968 00:20:17.814 00:29:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:17.814 00:29:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:17.814 00:29:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92968' 00:20:17.814 00:29:04 -- common/autotest_common.sh@945 -- # kill 92968 00:20:17.814 [2024-07-13 00:29:04.994295] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:17.814 00:29:04 -- common/autotest_common.sh@950 -- # wait 92968 00:20:18.073 00:29:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:18.073 00:29:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:18.073 00:29:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:18.073 00:29:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:18.073 00:29:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:18.073 00:29:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.073 00:29:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.073 00:29:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.332 00:29:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:18.332 00:20:18.332 real 0m2.696s 00:20:18.332 user 0m7.388s 00:20:18.332 sys 0m0.722s 00:20:18.332 00:29:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:18.332 00:29:05 -- common/autotest_common.sh@10 -- # set +x 00:20:18.332 ************************************ 00:20:18.332 END TEST nvmf_identify 00:20:18.332 ************************************ 00:20:18.332 00:29:05 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:18.332 00:29:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:18.332 00:29:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:18.332 00:29:05 -- common/autotest_common.sh@10 -- # set +x 00:20:18.332 ************************************ 00:20:18.332 START TEST nvmf_perf 00:20:18.332 ************************************ 00:20:18.332 00:29:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:18.332 * Looking for test storage... 
00:20:18.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:18.332 00:29:05 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:18.332 00:29:05 -- nvmf/common.sh@7 -- # uname -s 00:20:18.332 00:29:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.332 00:29:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.332 00:29:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.332 00:29:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.332 00:29:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.332 00:29:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.332 00:29:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.332 00:29:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.332 00:29:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.332 00:29:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.332 00:29:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:20:18.332 00:29:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:20:18.332 00:29:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.332 00:29:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.332 00:29:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:18.332 00:29:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:18.332 00:29:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.332 00:29:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.332 00:29:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.332 00:29:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.332 00:29:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.332 00:29:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.332 00:29:05 -- paths/export.sh@5 -- 
# export PATH 00:20:18.332 00:29:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.332 00:29:05 -- nvmf/common.sh@46 -- # : 0 00:20:18.332 00:29:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:18.332 00:29:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:18.332 00:29:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:18.332 00:29:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.332 00:29:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.332 00:29:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:18.332 00:29:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:18.332 00:29:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:18.332 00:29:05 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:18.332 00:29:05 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:18.332 00:29:05 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.332 00:29:05 -- host/perf.sh@17 -- # nvmftestinit 00:20:18.332 00:29:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:18.332 00:29:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.332 00:29:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:18.332 00:29:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:18.332 00:29:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:18.332 00:29:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.332 00:29:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.332 00:29:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.332 00:29:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:18.332 00:29:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:18.332 00:29:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:18.332 00:29:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:18.332 00:29:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:18.332 00:29:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:18.332 00:29:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.332 00:29:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.332 00:29:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:18.332 00:29:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:18.332 00:29:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:18.332 00:29:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:18.332 00:29:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:18.332 00:29:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.332 00:29:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:18.332 00:29:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:18.332 00:29:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:18.332 00:29:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:18.332 00:29:05 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:18.332 00:29:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:18.332 Cannot find device "nvmf_tgt_br" 00:20:18.332 00:29:05 -- nvmf/common.sh@154 -- # true 00:20:18.332 00:29:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:18.332 Cannot find device "nvmf_tgt_br2" 00:20:18.332 00:29:05 -- nvmf/common.sh@155 -- # true 00:20:18.332 00:29:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:18.332 00:29:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:18.332 Cannot find device "nvmf_tgt_br" 00:20:18.332 00:29:05 -- nvmf/common.sh@157 -- # true 00:20:18.332 00:29:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:18.332 Cannot find device "nvmf_tgt_br2" 00:20:18.332 00:29:05 -- nvmf/common.sh@158 -- # true 00:20:18.332 00:29:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:18.591 00:29:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:18.591 00:29:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:18.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.591 00:29:05 -- nvmf/common.sh@161 -- # true 00:20:18.591 00:29:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:18.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.591 00:29:05 -- nvmf/common.sh@162 -- # true 00:20:18.591 00:29:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:18.591 00:29:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:18.591 00:29:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:18.591 00:29:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:18.591 00:29:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:18.591 00:29:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:18.591 00:29:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:18.591 00:29:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:18.591 00:29:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:18.591 00:29:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:18.591 00:29:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:18.591 00:29:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:18.591 00:29:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:18.591 00:29:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:18.591 00:29:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:18.591 00:29:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:18.591 00:29:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:18.591 00:29:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:18.591 00:29:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:18.591 00:29:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:18.591 00:29:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:18.591 00:29:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:18.591 00:29:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:18.591 00:29:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:18.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:20:18.591 00:20:18.591 --- 10.0.0.2 ping statistics --- 00:20:18.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.591 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:20:18.591 00:29:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:18.591 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:18.591 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:20:18.591 00:20:18.591 --- 10.0.0.3 ping statistics --- 00:20:18.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.591 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:18.592 00:29:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:18.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:18.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:20:18.592 00:20:18.592 --- 10.0.0.1 ping statistics --- 00:20:18.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.592 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:18.592 00:29:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.592 00:29:05 -- nvmf/common.sh@421 -- # return 0 00:20:18.592 00:29:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:18.592 00:29:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.592 00:29:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:18.592 00:29:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:18.592 00:29:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.592 00:29:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:18.592 00:29:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:18.592 00:29:05 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:18.592 00:29:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:18.592 00:29:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:18.592 00:29:05 -- common/autotest_common.sh@10 -- # set +x 00:20:18.851 00:29:05 -- nvmf/common.sh@469 -- # nvmfpid=93187 00:20:18.851 00:29:05 -- nvmf/common.sh@470 -- # waitforlisten 93187 00:20:18.851 00:29:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:18.851 00:29:05 -- common/autotest_common.sh@819 -- # '[' -z 93187 ']' 00:20:18.851 00:29:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.851 00:29:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:18.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.851 00:29:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.851 00:29:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:18.851 00:29:05 -- common/autotest_common.sh@10 -- # set +x 00:20:18.851 [2024-07-13 00:29:05.883359] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
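The target application is launched inside the nvmf_tgt_ns_spdk namespace, so it only sees the veth/bridge network built above. Condensed from the trace (the wait for the RPC socket is only sketched):

  # shm id 0, all tracepoint groups enabled, core mask 0xF (4 reactors)
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # the harness then polls /var/tmp/spdk.sock until the app answers RPCs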
00:20:18.851 [2024-07-13 00:29:05.884041] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.851 [2024-07-13 00:29:06.026797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:19.111 [2024-07-13 00:29:06.128705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:19.111 [2024-07-13 00:29:06.128872] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.111 [2024-07-13 00:29:06.128888] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.111 [2024-07-13 00:29:06.128899] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.111 [2024-07-13 00:29:06.129401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.111 [2024-07-13 00:29:06.129585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.111 [2024-07-13 00:29:06.129706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.111 [2024-07-13 00:29:06.129757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.677 00:29:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:19.677 00:29:06 -- common/autotest_common.sh@852 -- # return 0 00:20:19.677 00:29:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:19.677 00:29:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:19.677 00:29:06 -- common/autotest_common.sh@10 -- # set +x 00:20:19.935 00:29:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.935 00:29:06 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:19.935 00:29:06 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:20.194 00:29:07 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:20.194 00:29:07 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:20.452 00:29:07 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:20.452 00:29:07 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:20.711 00:29:07 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:20.711 00:29:07 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:20.711 00:29:07 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:20.711 00:29:07 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:20.711 00:29:07 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:20.970 [2024-07-13 00:29:08.006366] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.970 00:29:08 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:21.229 00:29:08 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:21.229 00:29:08 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:21.488 00:29:08 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:21.488 00:29:08 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:21.748 
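From here perf.sh drives the target purely through rpc.py: a TCP transport, one subsystem, the Malloc0 and Nvme0n1 bdevs as namespaces, and (in the lines that follow) a TCP listener. The same sequence, condensed:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc bdev_malloc_create 64 512                      # 64 MiB malloc bdev, 512 B blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420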
00:29:08 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:21.748 [2024-07-13 00:29:08.940312] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.748 00:29:08 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:22.007 00:29:09 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:22.007 00:29:09 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:22.007 00:29:09 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:22.007 00:29:09 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:23.384 Initializing NVMe Controllers 00:20:23.384 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:23.384 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:23.384 Initialization complete. Launching workers. 00:20:23.384 ======================================================== 00:20:23.384 Latency(us) 00:20:23.384 Device Information : IOPS MiB/s Average min max 00:20:23.384 PCIE (0000:00:06.0) NSID 1 from core 0: 20468.56 79.96 1563.34 440.61 7242.12 00:20:23.384 ======================================================== 00:20:23.384 Total : 20468.56 79.96 1563.34 440.61 7242.12 00:20:23.384 00:20:23.384 00:29:10 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:24.766 Initializing NVMe Controllers 00:20:24.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:24.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:24.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:24.766 Initialization complete. Launching workers. 00:20:24.766 ======================================================== 00:20:24.766 Latency(us) 00:20:24.766 Device Information : IOPS MiB/s Average min max 00:20:24.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3118.29 12.18 320.45 119.24 4274.01 00:20:24.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.50 0.48 8160.93 5962.39 12061.13 00:20:24.766 ======================================================== 00:20:24.766 Total : 3241.79 12.66 619.13 119.24 12061.13 00:20:24.766 00:20:24.766 00:29:11 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:26.144 Initializing NVMe Controllers 00:20:26.144 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:26.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:26.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:26.144 Initialization complete. Launching workers. 
00:20:26.144 ======================================================== 00:20:26.144 Latency(us) 00:20:26.144 Device Information : IOPS MiB/s Average min max 00:20:26.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9110.00 35.59 3516.65 602.80 7482.89 00:20:26.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2725.00 10.64 11856.59 4788.85 20105.80 00:20:26.144 ======================================================== 00:20:26.144 Total : 11835.00 46.23 5436.91 602.80 20105.80 00:20:26.144 00:20:26.144 00:29:13 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:26.144 00:29:13 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:28.674 Initializing NVMe Controllers 00:20:28.674 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:28.674 Controller IO queue size 128, less than required. 00:20:28.674 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:28.674 Controller IO queue size 128, less than required. 00:20:28.674 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:28.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:28.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:28.674 Initialization complete. Launching workers. 00:20:28.674 ======================================================== 00:20:28.674 Latency(us) 00:20:28.674 Device Information : IOPS MiB/s Average min max 00:20:28.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1675.27 418.82 77516.92 50268.02 148712.16 00:20:28.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 546.43 136.61 243559.24 82861.84 374659.15 00:20:28.674 ======================================================== 00:20:28.674 Total : 2221.70 555.42 118354.96 50268.02 374659.15 00:20:28.674 00:20:28.674 00:29:15 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:28.674 No valid NVMe controllers or AIO or URING devices found 00:20:28.674 Initializing NVMe Controllers 00:20:28.674 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:28.674 Controller IO queue size 128, less than required. 00:20:28.674 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:28.674 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:28.674 Controller IO queue size 128, less than required. 00:20:28.674 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:28.674 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:20:28.674 WARNING: Some requested NVMe devices were skipped 00:20:28.674 00:29:15 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:31.203 Initializing NVMe Controllers 00:20:31.203 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:31.203 Controller IO queue size 128, less than required. 00:20:31.203 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:31.203 Controller IO queue size 128, less than required. 00:20:31.203 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:31.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:31.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:31.203 Initialization complete. Launching workers. 00:20:31.203 00:20:31.203 ==================== 00:20:31.203 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:31.203 TCP transport: 00:20:31.203 polls: 6802 00:20:31.203 idle_polls: 4651 00:20:31.203 sock_completions: 2151 00:20:31.203 nvme_completions: 3916 00:20:31.203 submitted_requests: 6035 00:20:31.203 queued_requests: 1 00:20:31.203 00:20:31.203 ==================== 00:20:31.203 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:31.203 TCP transport: 00:20:31.203 polls: 6851 00:20:31.203 idle_polls: 4515 00:20:31.203 sock_completions: 2336 00:20:31.203 nvme_completions: 4549 00:20:31.203 submitted_requests: 6945 00:20:31.203 queued_requests: 1 00:20:31.203 ======================================================== 00:20:31.203 Latency(us) 00:20:31.203 Device Information : IOPS MiB/s Average min max 00:20:31.203 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1042.34 260.59 125383.56 95760.81 220550.92 00:20:31.203 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1200.82 300.20 108163.36 57432.78 138478.07 00:20:31.203 ======================================================== 00:20:31.203 Total : 2243.16 560.79 116165.17 57432.78 220550.92 00:20:31.203 00:20:31.203 00:29:18 -- host/perf.sh@66 -- # sync 00:20:31.203 00:29:18 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:31.461 00:29:18 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:31.461 00:29:18 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:31.461 00:29:18 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:31.719 00:29:18 -- host/perf.sh@72 -- # ls_guid=c7385e42-6215-43b5-b485-85662329e454 00:20:31.719 00:29:18 -- host/perf.sh@73 -- # get_lvs_free_mb c7385e42-6215-43b5-b485-85662329e454 00:20:31.719 00:29:18 -- common/autotest_common.sh@1343 -- # local lvs_uuid=c7385e42-6215-43b5-b485-85662329e454 00:20:31.719 00:29:18 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:31.719 00:29:18 -- common/autotest_common.sh@1345 -- # local fc 00:20:31.719 00:29:18 -- common/autotest_common.sh@1346 -- # local cs 00:20:31.719 00:29:18 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:31.977 00:29:19 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:31.977 
{ 00:20:31.977 "base_bdev": "Nvme0n1", 00:20:31.977 "block_size": 4096, 00:20:31.977 "cluster_size": 4194304, 00:20:31.977 "free_clusters": 1278, 00:20:31.977 "name": "lvs_0", 00:20:31.977 "total_data_clusters": 1278, 00:20:31.977 "uuid": "c7385e42-6215-43b5-b485-85662329e454" 00:20:31.977 } 00:20:31.977 ]' 00:20:31.977 00:29:19 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="c7385e42-6215-43b5-b485-85662329e454") .free_clusters' 00:20:31.977 00:29:19 -- common/autotest_common.sh@1348 -- # fc=1278 00:20:31.977 00:29:19 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="c7385e42-6215-43b5-b485-85662329e454") .cluster_size' 00:20:32.235 5112 00:20:32.235 00:29:19 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:32.235 00:29:19 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:20:32.235 00:29:19 -- common/autotest_common.sh@1353 -- # echo 5112 00:20:32.235 00:29:19 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:32.235 00:29:19 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c7385e42-6215-43b5-b485-85662329e454 lbd_0 5112 00:20:32.492 00:29:19 -- host/perf.sh@80 -- # lb_guid=91db23b9-6daf-417d-903f-5b5742a34cbd 00:20:32.492 00:29:19 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 91db23b9-6daf-417d-903f-5b5742a34cbd lvs_n_0 00:20:32.750 00:29:19 -- host/perf.sh@83 -- # ls_nested_guid=afeb21f1-25c1-448a-95d6-da4cc6ad90f6 00:20:32.750 00:29:19 -- host/perf.sh@84 -- # get_lvs_free_mb afeb21f1-25c1-448a-95d6-da4cc6ad90f6 00:20:32.750 00:29:19 -- common/autotest_common.sh@1343 -- # local lvs_uuid=afeb21f1-25c1-448a-95d6-da4cc6ad90f6 00:20:32.750 00:29:19 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:32.750 00:29:19 -- common/autotest_common.sh@1345 -- # local fc 00:20:32.750 00:29:19 -- common/autotest_common.sh@1346 -- # local cs 00:20:32.750 00:29:19 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:33.009 00:29:20 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:33.009 { 00:20:33.009 "base_bdev": "Nvme0n1", 00:20:33.009 "block_size": 4096, 00:20:33.009 "cluster_size": 4194304, 00:20:33.009 "free_clusters": 0, 00:20:33.009 "name": "lvs_0", 00:20:33.009 "total_data_clusters": 1278, 00:20:33.009 "uuid": "c7385e42-6215-43b5-b485-85662329e454" 00:20:33.009 }, 00:20:33.009 { 00:20:33.009 "base_bdev": "91db23b9-6daf-417d-903f-5b5742a34cbd", 00:20:33.009 "block_size": 4096, 00:20:33.009 "cluster_size": 4194304, 00:20:33.009 "free_clusters": 1276, 00:20:33.009 "name": "lvs_n_0", 00:20:33.009 "total_data_clusters": 1276, 00:20:33.009 "uuid": "afeb21f1-25c1-448a-95d6-da4cc6ad90f6" 00:20:33.009 } 00:20:33.009 ]' 00:20:33.009 00:29:20 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="afeb21f1-25c1-448a-95d6-da4cc6ad90f6") .free_clusters' 00:20:33.268 00:29:20 -- common/autotest_common.sh@1348 -- # fc=1276 00:20:33.268 00:29:20 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="afeb21f1-25c1-448a-95d6-da4cc6ad90f6") .cluster_size' 00:20:33.268 00:29:20 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:33.268 00:29:20 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:20:33.268 00:29:20 -- common/autotest_common.sh@1353 -- # echo 5104 00:20:33.268 5104 00:20:33.268 00:29:20 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:33.268 00:29:20 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
afeb21f1-25c1-448a-95d6-da4cc6ad90f6 lbd_nest_0 5104 00:20:33.606 00:29:20 -- host/perf.sh@88 -- # lb_nested_guid=15221552-b34b-400b-b70e-5db533c17411 00:20:33.606 00:29:20 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:33.606 00:29:20 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:33.606 00:29:20 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 15221552-b34b-400b-b70e-5db533c17411 00:20:33.879 00:29:21 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:34.138 00:29:21 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:34.138 00:29:21 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:34.138 00:29:21 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:34.138 00:29:21 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:34.138 00:29:21 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:34.396 No valid NVMe controllers or AIO or URING devices found 00:20:34.396 Initializing NVMe Controllers 00:20:34.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.396 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:34.396 WARNING: Some requested NVMe devices were skipped 00:20:34.396 00:29:21 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:34.396 00:29:21 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:46.613 Initializing NVMe Controllers 00:20:46.613 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:46.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:46.613 Initialization complete. Launching workers. 
00:20:46.613 ======================================================== 00:20:46.613 Latency(us) 00:20:46.613 Device Information : IOPS MiB/s Average min max 00:20:46.613 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 760.20 95.02 1314.66 423.00 7554.07 00:20:46.613 ======================================================== 00:20:46.613 Total : 760.20 95.02 1314.66 423.00 7554.07 00:20:46.614 00:20:46.614 00:29:31 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:46.614 00:29:31 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:46.614 00:29:31 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:46.614 No valid NVMe controllers or AIO or URING devices found 00:20:46.614 Initializing NVMe Controllers 00:20:46.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:46.614 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:46.614 WARNING: Some requested NVMe devices were skipped 00:20:46.614 00:29:32 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:46.614 00:29:32 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:56.588 Initializing NVMe Controllers 00:20:56.588 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:56.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:56.588 Initialization complete. Launching workers. 00:20:56.588 ======================================================== 00:20:56.588 Latency(us) 00:20:56.588 Device Information : IOPS MiB/s Average min max 00:20:56.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1149.32 143.66 27881.10 8058.55 72048.18 00:20:56.588 ======================================================== 00:20:56.588 Total : 1149.32 143.66 27881.10 8058.55 72048.18 00:20:56.588 00:20:56.588 00:29:42 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:56.588 00:29:42 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:56.588 00:29:42 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:56.588 No valid NVMe controllers or AIO or URING devices found 00:20:56.588 Initializing NVMe Controllers 00:20:56.588 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:56.588 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:56.588 WARNING: Some requested NVMe devices were skipped 00:20:56.588 00:29:42 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:56.588 00:29:42 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:06.567 Initializing NVMe Controllers 00:21:06.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:06.567 Controller IO queue size 128, less than required. 00:21:06.567 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
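The runs above and below are iterations of the qd_depth × io_size sweep started at host/perf.sh@95-99: queue depths 1, 32 and 128 against 512-byte and 128 KiB I/O. The 512-byte iterations are skipped with the "invalid ns size ... block size 4096" warnings because the lvol-backed namespace uses a 4096-byte block. Each remaining iteration is a separate spdk_nvme_perf invocation of the same shape; for reference, the flags as used here:

  # -q queue depth, -o I/O size in bytes, -w workload, -M read percentage,
  # -t run time in seconds, -r target transport ID
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'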
00:21:06.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:06.567 Initialization complete. Launching workers. 00:21:06.567 ======================================================== 00:21:06.567 Latency(us) 00:21:06.567 Device Information : IOPS MiB/s Average min max 00:21:06.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3464.45 433.06 37006.60 10728.52 75866.68 00:21:06.567 ======================================================== 00:21:06.567 Total : 3464.45 433.06 37006.60 10728.52 75866.68 00:21:06.567 00:21:06.567 00:29:53 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:06.567 00:29:53 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 15221552-b34b-400b-b70e-5db533c17411 00:21:06.567 00:29:53 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:06.825 00:29:53 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 91db23b9-6daf-417d-903f-5b5742a34cbd 00:21:07.083 00:29:54 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:07.342 00:29:54 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:07.342 00:29:54 -- host/perf.sh@114 -- # nvmftestfini 00:21:07.342 00:29:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:07.342 00:29:54 -- nvmf/common.sh@116 -- # sync 00:21:07.342 00:29:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:07.342 00:29:54 -- nvmf/common.sh@119 -- # set +e 00:21:07.342 00:29:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:07.342 00:29:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:07.342 rmmod nvme_tcp 00:21:07.342 rmmod nvme_fabrics 00:21:07.342 rmmod nvme_keyring 00:21:07.342 00:29:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:07.342 00:29:54 -- nvmf/common.sh@123 -- # set -e 00:21:07.342 00:29:54 -- nvmf/common.sh@124 -- # return 0 00:21:07.342 00:29:54 -- nvmf/common.sh@477 -- # '[' -n 93187 ']' 00:21:07.342 00:29:54 -- nvmf/common.sh@478 -- # killprocess 93187 00:21:07.342 00:29:54 -- common/autotest_common.sh@926 -- # '[' -z 93187 ']' 00:21:07.342 00:29:54 -- common/autotest_common.sh@930 -- # kill -0 93187 00:21:07.342 00:29:54 -- common/autotest_common.sh@931 -- # uname 00:21:07.342 00:29:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:07.342 00:29:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 93187 00:21:07.342 killing process with pid 93187 00:21:07.342 00:29:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:07.342 00:29:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:07.342 00:29:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 93187' 00:21:07.342 00:29:54 -- common/autotest_common.sh@945 -- # kill 93187 00:21:07.342 00:29:54 -- common/autotest_common.sh@950 -- # wait 93187 00:21:09.246 00:29:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:09.246 00:29:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:09.246 00:29:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:09.246 00:29:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:09.246 00:29:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:09.246 00:29:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.246 00:29:56 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:21:09.246 00:29:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.246 00:29:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:09.246 ************************************ 00:21:09.246 END TEST nvmf_perf 00:21:09.246 ************************************ 00:21:09.246 00:21:09.246 real 0m50.925s 00:21:09.246 user 3m11.021s 00:21:09.246 sys 0m11.528s 00:21:09.246 00:29:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:09.246 00:29:56 -- common/autotest_common.sh@10 -- # set +x 00:21:09.246 00:29:56 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:09.246 00:29:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:09.247 00:29:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:09.247 00:29:56 -- common/autotest_common.sh@10 -- # set +x 00:21:09.247 ************************************ 00:21:09.247 START TEST nvmf_fio_host 00:21:09.247 ************************************ 00:21:09.247 00:29:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:09.247 * Looking for test storage... 00:21:09.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:09.247 00:29:56 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:09.247 00:29:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.247 00:29:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.247 00:29:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.247 00:29:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.247 00:29:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.247 00:29:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.247 00:29:56 -- paths/export.sh@5 -- # export PATH 00:21:09.247 00:29:56 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.247 00:29:56 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:09.247 00:29:56 -- nvmf/common.sh@7 -- # uname -s 00:21:09.247 00:29:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.247 00:29:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.247 00:29:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.247 00:29:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.247 00:29:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.247 00:29:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.247 00:29:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.247 00:29:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.247 00:29:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.247 00:29:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.247 00:29:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:21:09.247 00:29:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:21:09.247 00:29:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.247 00:29:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.247 00:29:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:09.247 00:29:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:09.247 00:29:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.247 00:29:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.247 00:29:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.247 00:29:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.247 00:29:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.247 00:29:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.247 00:29:56 -- paths/export.sh@5 -- # export PATH 00:21:09.247 00:29:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.247 00:29:56 -- nvmf/common.sh@46 -- # : 0 00:21:09.247 00:29:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:09.247 00:29:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:09.247 00:29:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:09.247 00:29:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.247 00:29:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.247 00:29:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:09.247 00:29:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:09.247 00:29:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:09.247 00:29:56 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:09.247 00:29:56 -- host/fio.sh@14 -- # nvmftestinit 00:21:09.247 00:29:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:09.247 00:29:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.247 00:29:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:09.247 00:29:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:09.247 00:29:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:09.247 00:29:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.247 00:29:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:09.247 00:29:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.247 00:29:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:09.247 00:29:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:09.247 00:29:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:09.247 00:29:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:09.247 00:29:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:09.247 00:29:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:09.247 00:29:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.247 00:29:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.247 00:29:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:09.247 00:29:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:09.247 00:29:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:09.247 00:29:56 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:09.247 00:29:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:09.247 00:29:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.247 00:29:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:09.247 00:29:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:09.247 00:29:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:09.247 00:29:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:09.247 00:29:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:09.507 00:29:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:09.507 Cannot find device "nvmf_tgt_br" 00:21:09.507 00:29:56 -- nvmf/common.sh@154 -- # true 00:21:09.507 00:29:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:09.507 Cannot find device "nvmf_tgt_br2" 00:21:09.507 00:29:56 -- nvmf/common.sh@155 -- # true 00:21:09.507 00:29:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:09.507 00:29:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:09.507 Cannot find device "nvmf_tgt_br" 00:21:09.507 00:29:56 -- nvmf/common.sh@157 -- # true 00:21:09.507 00:29:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:09.507 Cannot find device "nvmf_tgt_br2" 00:21:09.507 00:29:56 -- nvmf/common.sh@158 -- # true 00:21:09.507 00:29:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:09.507 00:29:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:09.507 00:29:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:09.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:09.507 00:29:56 -- nvmf/common.sh@161 -- # true 00:21:09.507 00:29:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:09.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:09.507 00:29:56 -- nvmf/common.sh@162 -- # true 00:21:09.507 00:29:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:09.507 00:29:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:09.507 00:29:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:09.507 00:29:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:09.507 00:29:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:09.507 00:29:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:09.507 00:29:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:09.507 00:29:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:09.507 00:29:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:09.507 00:29:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:09.507 00:29:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:09.507 00:29:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:09.507 00:29:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:09.507 00:29:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:09.507 00:29:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
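fio.sh reuses the same nvmf_veth_init helper seen in the perf test: a private namespace for the target, reached from the root namespace over veth pairs. Reduced to the commands visible above (the second target interface, 10.0.0.3, is created the same way, and the bridge wiring follows in the next lines):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays in the root ns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side, moved into the ns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up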
00:21:09.507 00:29:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:09.507 00:29:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:09.507 00:29:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:09.507 00:29:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:09.766 00:29:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:09.766 00:29:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:09.766 00:29:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:09.766 00:29:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:09.766 00:29:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:09.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:21:09.766 00:21:09.766 --- 10.0.0.2 ping statistics --- 00:21:09.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.766 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:21:09.766 00:29:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:09.766 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:09.766 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:21:09.766 00:21:09.766 --- 10.0.0.3 ping statistics --- 00:21:09.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.766 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:09.766 00:29:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:09.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:09.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:21:09.766 00:21:09.766 --- 10.0.0.1 ping statistics --- 00:21:09.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.766 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:21:09.766 00:29:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.766 00:29:56 -- nvmf/common.sh@421 -- # return 0 00:21:09.766 00:29:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:09.766 00:29:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.766 00:29:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:09.766 00:29:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:09.766 00:29:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.766 00:29:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:09.766 00:29:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:09.766 00:29:56 -- host/fio.sh@16 -- # [[ y != y ]] 00:21:09.766 00:29:56 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:09.766 00:29:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:09.766 00:29:56 -- common/autotest_common.sh@10 -- # set +x 00:21:09.766 00:29:56 -- host/fio.sh@24 -- # nvmfpid=94152 00:21:09.766 00:29:56 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:09.766 00:29:56 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:09.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
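Before the target application is launched, the nvmf_veth_init helper traced above builds the test network. Condensed into plain commands, it amounts to the following sketch (reconstructed from the trace, not a separate script shipped with SPDK; run as root, and the namespace/interface names are the test suite's own):

  ip netns add nvmf_tgt_ns_spdk                                        # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br            # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br             # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2            # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                             # host/initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link set nvmf_init_br master nvmf_br                              # bridge the three peer ends together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP traffic to the initiator interface
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # let the bridge forward between its ports

The three pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply confirm that this topology came up before the target process is started.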
00:21:09.766 00:29:56 -- host/fio.sh@28 -- # waitforlisten 94152 00:21:09.766 00:29:56 -- common/autotest_common.sh@819 -- # '[' -z 94152 ']' 00:21:09.766 00:29:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.766 00:29:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:09.766 00:29:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.766 00:29:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:09.766 00:29:56 -- common/autotest_common.sh@10 -- # set +x 00:21:09.766 [2024-07-13 00:29:56.879860] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:09.766 [2024-07-13 00:29:56.879961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.025 [2024-07-13 00:29:57.017043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.025 [2024-07-13 00:29:57.106504] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:10.025 [2024-07-13 00:29:57.106722] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.025 [2024-07-13 00:29:57.106752] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.025 [2024-07-13 00:29:57.106763] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.025 [2024-07-13 00:29:57.106894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.025 [2024-07-13 00:29:57.107497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.025 [2024-07-13 00:29:57.107576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.025 [2024-07-13 00:29:57.107584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.593 00:29:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:10.593 00:29:57 -- common/autotest_common.sh@852 -- # return 0 00:21:10.593 00:29:57 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:10.852 [2024-07-13 00:29:58.005939] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.852 00:29:58 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:10.852 00:29:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:10.852 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:21:11.112 00:29:58 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:11.399 Malloc1 00:21:11.399 00:29:58 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:11.658 00:29:58 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:11.658 00:29:58 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.916 [2024-07-13 00:29:59.101684] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.916 00:29:59 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
discovery -t tcp -a 10.0.0.2 -s 4420 00:21:12.174 00:29:59 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:12.174 00:29:59 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:12.174 00:29:59 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:12.174 00:29:59 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:12.174 00:29:59 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:12.174 00:29:59 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:12.174 00:29:59 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:12.174 00:29:59 -- common/autotest_common.sh@1320 -- # shift 00:21:12.174 00:29:59 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:12.174 00:29:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:12.174 00:29:59 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:12.174 00:29:59 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:12.174 00:29:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:12.174 00:29:59 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:12.174 00:29:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:12.174 00:29:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:12.174 00:29:59 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:12.174 00:29:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:12.174 00:29:59 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:12.174 00:29:59 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:12.174 00:29:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:12.174 00:29:59 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:12.174 00:29:59 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:12.433 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:12.433 fio-3.35 00:21:12.433 Starting 1 thread 00:21:15.016 00:21:15.016 test: (groupid=0, jobs=1): err= 0: pid=94282: Sat Jul 13 00:30:01 2024 00:21:15.016 read: IOPS=9046, BW=35.3MiB/s (37.1MB/s)(70.9MiB/2006msec) 00:21:15.016 slat (nsec): min=1888, max=416541, avg=2513.78, stdev=4174.83 00:21:15.016 clat (usec): min=3492, max=13425, avg=7517.95, stdev=708.55 00:21:15.016 lat (usec): min=3540, max=13427, avg=7520.47, stdev=708.42 00:21:15.016 clat percentiles (usec): 00:21:15.016 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 6718], 20.00th=[ 6980], 00:21:15.016 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7635], 00:21:15.016 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8356], 95.00th=[ 8586], 00:21:15.016 | 99.00th=[ 9241], 99.50th=[ 9896], 99.90th=[11863], 99.95th=[12518], 00:21:15.016 | 99.99th=[13304] 00:21:15.016 bw ( KiB/s): min=35360, max=37712, per=99.86%, avg=36138.00, stdev=1064.56, samples=4 
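For reference, the fio_nvme wrapper used in this trace does little more than preload SPDK's fio plugin and pass the target coordinates through fio's --filename string. Stripped of the sanitizer-library probing, the invocation above boils down to roughly this (paths as they appear in this run):

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
  # ioengine=spdk and iodepth=128 come from the job file; the --filename string selects
  # the transport (tcp), address family, target address, port (4420) and namespace id.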
00:21:15.016 iops : min= 8840, max= 9428, avg=9034.50, stdev=266.14, samples=4 00:21:15.016 write: IOPS=9061, BW=35.4MiB/s (37.1MB/s)(71.0MiB/2006msec); 0 zone resets 00:21:15.016 slat (nsec): min=1949, max=328075, avg=2630.51, stdev=3200.61 00:21:15.016 clat (usec): min=2708, max=12275, avg=6565.61, stdev=593.06 00:21:15.016 lat (usec): min=2723, max=12277, avg=6568.24, stdev=593.00 00:21:15.016 clat percentiles (usec): 00:21:15.016 | 1.00th=[ 5276], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 6128], 00:21:15.016 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6718], 00:21:15.016 | 70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7242], 95.00th=[ 7439], 00:21:15.016 | 99.00th=[ 7898], 99.50th=[ 8586], 99.90th=[10683], 99.95th=[11338], 00:21:15.016 | 99.99th=[12256] 00:21:15.016 bw ( KiB/s): min=35664, max=36992, per=99.93%, avg=36223.50, stdev=654.05, samples=4 00:21:15.016 iops : min= 8916, max= 9248, avg=9055.75, stdev=163.43, samples=4 00:21:15.016 lat (msec) : 4=0.07%, 10=99.55%, 20=0.38% 00:21:15.016 cpu : usr=63.49%, sys=27.08%, ctx=6, majf=0, minf=5 00:21:15.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:15.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:15.016 issued rwts: total=18148,18178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:15.016 00:21:15.016 Run status group 0 (all jobs): 00:21:15.016 READ: bw=35.3MiB/s (37.1MB/s), 35.3MiB/s-35.3MiB/s (37.1MB/s-37.1MB/s), io=70.9MiB (74.3MB), run=2006-2006msec 00:21:15.016 WRITE: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.0MiB (74.5MB), run=2006-2006msec 00:21:15.016 00:30:01 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:15.016 00:30:01 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:15.016 00:30:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:15.016 00:30:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:15.016 00:30:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:15.016 00:30:01 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:15.016 00:30:01 -- common/autotest_common.sh@1320 -- # shift 00:21:15.016 00:30:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:15.016 00:30:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:15.016 00:30:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:15.016 00:30:01 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:15.016 00:30:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:15.016 00:30:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:15.016 00:30:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:15.016 00:30:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:15.016 00:30:01 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:15.016 00:30:01 -- common/autotest_common.sh@1324 -- # 
awk '{print $3}' 00:21:15.016 00:30:01 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:15.016 00:30:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:15.016 00:30:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:15.016 00:30:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:15.016 00:30:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:15.016 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:15.016 fio-3.35 00:21:15.016 Starting 1 thread 00:21:17.582 00:21:17.582 test: (groupid=0, jobs=1): err= 0: pid=94326: Sat Jul 13 00:30:04 2024 00:21:17.582 read: IOPS=8098, BW=127MiB/s (133MB/s)(254MiB/2008msec) 00:21:17.582 slat (usec): min=2, max=154, avg= 3.92, stdev= 3.17 00:21:17.582 clat (usec): min=2499, max=20037, avg=9347.73, stdev=2305.58 00:21:17.582 lat (usec): min=2503, max=20041, avg=9351.65, stdev=2305.71 00:21:17.582 clat percentiles (usec): 00:21:17.582 | 1.00th=[ 4817], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 7373], 00:21:17.582 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9896], 00:21:17.582 | 70.00th=[10552], 80.00th=[11207], 90.00th=[12125], 95.00th=[13304], 00:21:17.582 | 99.00th=[15664], 99.50th=[16712], 99.90th=[18482], 99.95th=[18744], 00:21:17.582 | 99.99th=[20055] 00:21:17.582 bw ( KiB/s): min=58848, max=78944, per=51.46%, avg=66686.25, stdev=8847.92, samples=4 00:21:17.582 iops : min= 3678, max= 4934, avg=4167.75, stdev=552.99, samples=4 00:21:17.582 write: IOPS=4741, BW=74.1MiB/s (77.7MB/s)(136MiB/1834msec); 0 zone resets 00:21:17.582 slat (usec): min=32, max=369, avg=38.84, stdev=10.48 00:21:17.582 clat (usec): min=4213, max=18104, avg=11260.41, stdev=1920.67 00:21:17.582 lat (usec): min=4247, max=18137, avg=11299.25, stdev=1922.38 00:21:17.582 clat percentiles (usec): 00:21:17.582 | 1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9503], 00:21:17.582 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11076], 60.00th=[11600], 00:21:17.582 | 70.00th=[12256], 80.00th=[12911], 90.00th=[13829], 95.00th=[14615], 00:21:17.582 | 99.00th=[16188], 99.50th=[16712], 99.90th=[17957], 99.95th=[17957], 00:21:17.582 | 99.99th=[18220] 00:21:17.582 bw ( KiB/s): min=62016, max=81184, per=91.33%, avg=69277.00, stdev=8534.27, samples=4 00:21:17.582 iops : min= 3876, max= 5074, avg=4329.75, stdev=533.39, samples=4 00:21:17.582 lat (msec) : 4=0.09%, 10=49.90%, 20=50.00%, 50=0.01% 00:21:17.582 cpu : usr=70.85%, sys=18.58%, ctx=5, majf=0, minf=1 00:21:17.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:17.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:17.582 issued rwts: total=16262,8695,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:17.582 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:17.582 00:21:17.582 Run status group 0 (all jobs): 00:21:17.582 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=254MiB (266MB), run=2008-2008msec 00:21:17.582 WRITE: bw=74.1MiB/s (77.7MB/s), 74.1MiB/s-74.1MiB/s (77.7MB/s-77.7MB/s), io=136MiB (142MB), run=1834-1834msec 00:21:17.582 00:30:04 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:21:17.582 00:30:04 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:17.582 00:30:04 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:17.582 00:30:04 -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:17.582 00:30:04 -- common/autotest_common.sh@1498 -- # bdfs=() 00:21:17.582 00:30:04 -- common/autotest_common.sh@1498 -- # local bdfs 00:21:17.582 00:30:04 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:17.582 00:30:04 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:17.582 00:30:04 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:21:17.582 00:30:04 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:21:17.582 00:30:04 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:17.582 00:30:04 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:17.841 Nvme0n1 00:21:17.841 00:30:04 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:18.099 00:30:05 -- host/fio.sh@53 -- # ls_guid=56637094-6787-4181-bc31-35a47b7b0014 00:21:18.099 00:30:05 -- host/fio.sh@54 -- # get_lvs_free_mb 56637094-6787-4181-bc31-35a47b7b0014 00:21:18.099 00:30:05 -- common/autotest_common.sh@1343 -- # local lvs_uuid=56637094-6787-4181-bc31-35a47b7b0014 00:21:18.099 00:30:05 -- common/autotest_common.sh@1344 -- # local lvs_info 00:21:18.099 00:30:05 -- common/autotest_common.sh@1345 -- # local fc 00:21:18.099 00:30:05 -- common/autotest_common.sh@1346 -- # local cs 00:21:18.099 00:30:05 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:18.357 00:30:05 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:21:18.357 { 00:21:18.357 "base_bdev": "Nvme0n1", 00:21:18.357 "block_size": 4096, 00:21:18.357 "cluster_size": 1073741824, 00:21:18.357 "free_clusters": 4, 00:21:18.357 "name": "lvs_0", 00:21:18.357 "total_data_clusters": 4, 00:21:18.357 "uuid": "56637094-6787-4181-bc31-35a47b7b0014" 00:21:18.357 } 00:21:18.357 ]' 00:21:18.357 00:30:05 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="56637094-6787-4181-bc31-35a47b7b0014") .free_clusters' 00:21:18.357 00:30:05 -- common/autotest_common.sh@1348 -- # fc=4 00:21:18.357 00:30:05 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="56637094-6787-4181-bc31-35a47b7b0014") .cluster_size' 00:21:18.357 4096 00:21:18.357 00:30:05 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:21:18.357 00:30:05 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:21:18.357 00:30:05 -- common/autotest_common.sh@1353 -- # echo 4096 00:21:18.357 00:30:05 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:18.615 48fe32d1-4163-47ad-b3cf-9771ae4be8dd 00:21:18.615 00:30:05 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:18.872 00:30:06 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:19.131 00:30:06 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:19.388 00:30:06 -- host/fio.sh@59 -- # fio_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:19.388 00:30:06 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:19.388 00:30:06 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:19.388 00:30:06 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:19.388 00:30:06 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:19.388 00:30:06 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:19.388 00:30:06 -- common/autotest_common.sh@1320 -- # shift 00:21:19.388 00:30:06 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:19.388 00:30:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:19.388 00:30:06 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:19.388 00:30:06 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:19.388 00:30:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:19.388 00:30:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:19.388 00:30:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:19.388 00:30:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:19.388 00:30:06 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:19.388 00:30:06 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:19.388 00:30:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:19.388 00:30:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:19.388 00:30:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:19.388 00:30:06 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:19.388 00:30:06 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:19.646 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:19.646 fio-3.35 00:21:19.646 Starting 1 thread 00:21:22.177 00:21:22.177 test: (groupid=0, jobs=1): err= 0: pid=94482: Sat Jul 13 00:30:08 2024 00:21:22.177 read: IOPS=7136, BW=27.9MiB/s (29.2MB/s)(55.9MiB/2007msec) 00:21:22.177 slat (nsec): min=1916, max=418771, avg=2831.09, stdev=4582.90 00:21:22.177 clat (usec): min=4115, max=17306, avg=9617.77, stdev=943.22 00:21:22.177 lat (usec): min=4125, max=17308, avg=9620.60, stdev=943.02 00:21:22.177 clat percentiles (usec): 00:21:22.177 | 1.00th=[ 7570], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 8848], 00:21:22.177 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:21:22.177 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:21:22.177 | 99.00th=[11863], 99.50th=[12256], 99.90th=[15270], 99.95th=[15664], 00:21:22.177 | 99.99th=[16712] 00:21:22.177 bw ( KiB/s): min=27568, max=29024, per=99.78%, avg=28484.00, stdev=656.93, samples=4 00:21:22.177 iops : min= 6892, max= 7256, avg=7121.00, stdev=164.23, samples=4 00:21:22.177 write: IOPS=7123, BW=27.8MiB/s (29.2MB/s)(55.8MiB/2007msec); 0 zone resets 00:21:22.177 slat (nsec): 
min=1998, max=268030, avg=2923.64, stdev=3126.49 00:21:22.177 clat (usec): min=2592, max=15061, avg=8248.08, stdev=796.86 00:21:22.177 lat (usec): min=2607, max=15063, avg=8251.01, stdev=796.78 00:21:22.177 clat percentiles (usec): 00:21:22.177 | 1.00th=[ 6456], 5.00th=[ 7046], 10.00th=[ 7308], 20.00th=[ 7635], 00:21:22.177 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8225], 60.00th=[ 8455], 00:21:22.177 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9241], 95.00th=[ 9503], 00:21:22.177 | 99.00th=[10159], 99.50th=[10290], 99.90th=[12780], 99.95th=[13304], 00:21:22.177 | 99.99th=[15008] 00:21:22.177 bw ( KiB/s): min=28416, max=28688, per=100.00%, avg=28502.00, stdev=125.54, samples=4 00:21:22.177 iops : min= 7104, max= 7172, avg=7125.50, stdev=31.38, samples=4 00:21:22.177 lat (msec) : 4=0.04%, 10=83.03%, 20=16.93% 00:21:22.177 cpu : usr=66.80%, sys=24.68%, ctx=7, majf=0, minf=5 00:21:22.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:22.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:22.177 issued rwts: total=14323,14297,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.177 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:22.177 00:21:22.177 Run status group 0 (all jobs): 00:21:22.177 READ: bw=27.9MiB/s (29.2MB/s), 27.9MiB/s-27.9MiB/s (29.2MB/s-29.2MB/s), io=55.9MiB (58.7MB), run=2007-2007msec 00:21:22.177 WRITE: bw=27.8MiB/s (29.2MB/s), 27.8MiB/s-27.8MiB/s (29.2MB/s-29.2MB/s), io=55.8MiB (58.6MB), run=2007-2007msec 00:21:22.177 00:30:08 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:22.177 00:30:09 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:22.434 00:30:09 -- host/fio.sh@64 -- # ls_nested_guid=7be7b79c-9912-4f92-8c1b-43cda1d463b9 00:21:22.434 00:30:09 -- host/fio.sh@65 -- # get_lvs_free_mb 7be7b79c-9912-4f92-8c1b-43cda1d463b9 00:21:22.434 00:30:09 -- common/autotest_common.sh@1343 -- # local lvs_uuid=7be7b79c-9912-4f92-8c1b-43cda1d463b9 00:21:22.434 00:30:09 -- common/autotest_common.sh@1344 -- # local lvs_info 00:21:22.434 00:30:09 -- common/autotest_common.sh@1345 -- # local fc 00:21:22.434 00:30:09 -- common/autotest_common.sh@1346 -- # local cs 00:21:22.434 00:30:09 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:22.691 00:30:09 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:21:22.691 { 00:21:22.691 "base_bdev": "Nvme0n1", 00:21:22.691 "block_size": 4096, 00:21:22.691 "cluster_size": 1073741824, 00:21:22.691 "free_clusters": 0, 00:21:22.691 "name": "lvs_0", 00:21:22.691 "total_data_clusters": 4, 00:21:22.691 "uuid": "56637094-6787-4181-bc31-35a47b7b0014" 00:21:22.691 }, 00:21:22.691 { 00:21:22.691 "base_bdev": "48fe32d1-4163-47ad-b3cf-9771ae4be8dd", 00:21:22.691 "block_size": 4096, 00:21:22.691 "cluster_size": 4194304, 00:21:22.691 "free_clusters": 1022, 00:21:22.691 "name": "lvs_n_0", 00:21:22.691 "total_data_clusters": 1022, 00:21:22.691 "uuid": "7be7b79c-9912-4f92-8c1b-43cda1d463b9" 00:21:22.691 } 00:21:22.691 ]' 00:21:22.691 00:30:09 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="7be7b79c-9912-4f92-8c1b-43cda1d463b9") .free_clusters' 00:21:22.691 00:30:09 -- common/autotest_common.sh@1348 -- # fc=1022 00:21:22.691 00:30:09 -- common/autotest_common.sh@1349 -- # jq 
'.[] | select(.uuid=="7be7b79c-9912-4f92-8c1b-43cda1d463b9") .cluster_size' 00:21:22.691 00:30:09 -- common/autotest_common.sh@1349 -- # cs=4194304 00:21:22.691 4088 00:21:22.691 00:30:09 -- common/autotest_common.sh@1352 -- # free_mb=4088 00:21:22.691 00:30:09 -- common/autotest_common.sh@1353 -- # echo 4088 00:21:22.691 00:30:09 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:22.948 037c932e-1590-45e8-9cef-ee10b73dc309 00:21:22.948 00:30:10 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:23.206 00:30:10 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:23.463 00:30:10 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:23.721 00:30:10 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:23.721 00:30:10 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:23.721 00:30:10 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:23.721 00:30:10 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:23.721 00:30:10 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:23.721 00:30:10 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:23.721 00:30:10 -- common/autotest_common.sh@1320 -- # shift 00:21:23.721 00:30:10 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:23.721 00:30:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:23.721 00:30:10 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:23.721 00:30:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:23.721 00:30:10 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:23.721 00:30:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:23.721 00:30:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:23.721 00:30:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:23.721 00:30:10 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:23.721 00:30:10 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:23.721 00:30:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:23.721 00:30:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:23.721 00:30:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:23.721 00:30:10 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:23.721 00:30:10 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:23.721 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:23.721 fio-3.35 00:21:23.721 Starting 1 thread 00:21:26.249 00:21:26.249 test: 
(groupid=0, jobs=1): err= 0: pid=94598: Sat Jul 13 00:30:13 2024 00:21:26.249 read: IOPS=5017, BW=19.6MiB/s (20.6MB/s)(40.2MiB/2053msec) 00:21:26.249 slat (nsec): min=1877, max=286873, avg=2763.44, stdev=4234.73 00:21:26.249 clat (usec): min=4950, max=63341, avg=13535.39, stdev=3553.57 00:21:26.249 lat (usec): min=4959, max=63343, avg=13538.15, stdev=3553.44 00:21:26.249 clat percentiles (usec): 00:21:26.249 | 1.00th=[10683], 5.00th=[11469], 10.00th=[11863], 20.00th=[12387], 00:21:26.249 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13304], 60.00th=[13566], 00:21:26.249 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14877], 95.00th=[15401], 00:21:26.249 | 99.00th=[16450], 99.50th=[54264], 99.90th=[61604], 99.95th=[63177], 00:21:26.249 | 99.99th=[63177] 00:21:26.249 bw ( KiB/s): min=19864, max=20760, per=100.00%, avg=20492.00, stdev=426.81, samples=4 00:21:26.249 iops : min= 4966, max= 5190, avg=5123.00, stdev=106.70, samples=4 00:21:26.249 write: IOPS=5012, BW=19.6MiB/s (20.5MB/s)(40.2MiB/2053msec); 0 zone resets 00:21:26.249 slat (nsec): min=1996, max=224270, avg=2886.33, stdev=3137.19 00:21:26.249 clat (usec): min=2377, max=61635, avg=11916.43, stdev=3948.97 00:21:26.249 lat (usec): min=2423, max=61637, avg=11919.31, stdev=3948.93 00:21:26.249 clat percentiles (usec): 00:21:26.249 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10421], 20.00th=[10814], 00:21:26.249 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:21:26.249 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12911], 95.00th=[13304], 00:21:26.249 | 99.00th=[14353], 99.50th=[54789], 99.90th=[61080], 99.95th=[61604], 00:21:26.249 | 99.99th=[61604] 00:21:26.249 bw ( KiB/s): min=20048, max=20680, per=100.00%, avg=20436.00, stdev=281.29, samples=4 00:21:26.249 iops : min= 5012, max= 5170, avg=5109.00, stdev=70.32, samples=4 00:21:26.249 lat (msec) : 4=0.03%, 10=2.54%, 20=96.80%, 100=0.62% 00:21:26.249 cpu : usr=72.51%, sys=21.93%, ctx=6, majf=0, minf=5 00:21:26.249 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:26.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:26.249 issued rwts: total=10301,10290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:26.249 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:26.249 00:21:26.249 Run status group 0 (all jobs): 00:21:26.249 READ: bw=19.6MiB/s (20.6MB/s), 19.6MiB/s-19.6MiB/s (20.6MB/s-20.6MB/s), io=40.2MiB (42.2MB), run=2053-2053msec 00:21:26.249 WRITE: bw=19.6MiB/s (20.5MB/s), 19.6MiB/s-19.6MiB/s (20.5MB/s-20.5MB/s), io=40.2MiB (42.1MB), run=2053-2053msec 00:21:26.249 00:30:13 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:26.508 00:30:13 -- host/fio.sh@74 -- # sync 00:21:26.508 00:30:13 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:26.769 00:30:13 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:27.027 00:30:14 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:27.284 00:30:14 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:27.542 00:30:14 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:28.476 00:30:15 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 
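The teardown that follows the last fio run undoes the setup in reverse order: the subsystem goes first, then the nested logical volume and its store, then the volume and store they were layered on, and finally the attached NVMe controller. Condensed from the trace above into the underlying rpc.py calls (same paths as in this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3   # stop exporting the namespace
  sync
  $rpc bdev_lvol_delete lvs_n_0/lbd_nest_0                # nested lvol first...
  $rpc bdev_lvol_delete_lvstore -l lvs_n_0                # ...then the nested store built on lvs_0/lbd_0
  $rpc bdev_lvol_delete lvs_0/lbd_0                       # now the lvol that backed it can go
  $rpc bdev_lvol_delete_lvstore -l lvs_0
  $rpc bdev_nvme_detach_controller Nvme0                  # release the local NVMe device

Deleting lvs_n_0 before lvs_0/lbd_0 matters because the nested store was created on top of that lvol (bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 earlier in the trace).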
00:21:28.476 00:30:15 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:28.476 00:30:15 -- host/fio.sh@86 -- # nvmftestfini 00:21:28.476 00:30:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:28.476 00:30:15 -- nvmf/common.sh@116 -- # sync 00:21:28.476 00:30:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:28.476 00:30:15 -- nvmf/common.sh@119 -- # set +e 00:21:28.476 00:30:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:28.476 00:30:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:28.476 rmmod nvme_tcp 00:21:28.476 rmmod nvme_fabrics 00:21:28.476 rmmod nvme_keyring 00:21:28.476 00:30:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:28.476 00:30:15 -- nvmf/common.sh@123 -- # set -e 00:21:28.476 00:30:15 -- nvmf/common.sh@124 -- # return 0 00:21:28.476 00:30:15 -- nvmf/common.sh@477 -- # '[' -n 94152 ']' 00:21:28.476 00:30:15 -- nvmf/common.sh@478 -- # killprocess 94152 00:21:28.476 00:30:15 -- common/autotest_common.sh@926 -- # '[' -z 94152 ']' 00:21:28.476 00:30:15 -- common/autotest_common.sh@930 -- # kill -0 94152 00:21:28.476 00:30:15 -- common/autotest_common.sh@931 -- # uname 00:21:28.476 00:30:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:28.476 00:30:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94152 00:21:28.476 killing process with pid 94152 00:21:28.476 00:30:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:28.476 00:30:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:28.476 00:30:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94152' 00:21:28.476 00:30:15 -- common/autotest_common.sh@945 -- # kill 94152 00:21:28.476 00:30:15 -- common/autotest_common.sh@950 -- # wait 94152 00:21:28.734 00:30:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:28.735 00:30:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:28.735 00:30:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:28.735 00:30:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:28.735 00:30:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:28.735 00:30:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.735 00:30:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.735 00:30:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.735 00:30:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:28.735 00:21:28.735 real 0m19.455s 00:21:28.735 user 1m24.992s 00:21:28.735 sys 0m4.614s 00:21:28.735 00:30:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:28.735 00:30:15 -- common/autotest_common.sh@10 -- # set +x 00:21:28.735 ************************************ 00:21:28.735 END TEST nvmf_fio_host 00:21:28.735 ************************************ 00:21:28.735 00:30:15 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:28.735 00:30:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:28.735 00:30:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:28.735 00:30:15 -- common/autotest_common.sh@10 -- # set +x 00:21:28.735 ************************************ 00:21:28.735 START TEST nvmf_failover 00:21:28.735 ************************************ 00:21:28.735 00:30:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:28.735 * Looking for test storage... 
00:21:28.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:28.735 00:30:15 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:28.735 00:30:15 -- nvmf/common.sh@7 -- # uname -s 00:21:28.735 00:30:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.735 00:30:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.735 00:30:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.735 00:30:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.735 00:30:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.735 00:30:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.735 00:30:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.735 00:30:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.735 00:30:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.735 00:30:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.735 00:30:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:21:28.735 00:30:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:21:28.735 00:30:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.735 00:30:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.735 00:30:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:28.735 00:30:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:28.735 00:30:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.735 00:30:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.735 00:30:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.735 00:30:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.735 00:30:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.735 00:30:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.735 00:30:15 -- paths/export.sh@5 
-- # export PATH 00:21:28.735 00:30:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.735 00:30:15 -- nvmf/common.sh@46 -- # : 0 00:21:28.735 00:30:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:28.735 00:30:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:28.735 00:30:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:28.735 00:30:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.735 00:30:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.735 00:30:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:28.735 00:30:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:28.735 00:30:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:28.735 00:30:15 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:28.735 00:30:15 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:28.735 00:30:15 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:28.735 00:30:15 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:28.735 00:30:15 -- host/failover.sh@18 -- # nvmftestinit 00:21:28.735 00:30:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:28.735 00:30:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.735 00:30:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:28.735 00:30:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:28.735 00:30:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:28.735 00:30:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.735 00:30:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.735 00:30:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.735 00:30:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:28.735 00:30:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:28.735 00:30:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:28.735 00:30:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:28.735 00:30:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:28.735 00:30:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:28.735 00:30:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.735 00:30:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.735 00:30:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:28.994 00:30:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:28.994 00:30:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:28.994 00:30:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:28.994 00:30:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:28.994 00:30:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.994 00:30:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:28.994 00:30:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:28.994 00:30:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:21:28.994 00:30:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:28.994 00:30:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:28.994 00:30:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:28.994 Cannot find device "nvmf_tgt_br" 00:21:28.994 00:30:15 -- nvmf/common.sh@154 -- # true 00:21:28.994 00:30:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:28.994 Cannot find device "nvmf_tgt_br2" 00:21:28.994 00:30:16 -- nvmf/common.sh@155 -- # true 00:21:28.994 00:30:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:28.994 00:30:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:28.994 Cannot find device "nvmf_tgt_br" 00:21:28.994 00:30:16 -- nvmf/common.sh@157 -- # true 00:21:28.994 00:30:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:28.994 Cannot find device "nvmf_tgt_br2" 00:21:28.994 00:30:16 -- nvmf/common.sh@158 -- # true 00:21:28.994 00:30:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:28.994 00:30:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:28.994 00:30:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:28.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:28.994 00:30:16 -- nvmf/common.sh@161 -- # true 00:21:28.994 00:30:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:28.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:28.994 00:30:16 -- nvmf/common.sh@162 -- # true 00:21:28.994 00:30:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:28.994 00:30:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:28.994 00:30:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:28.994 00:30:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:28.994 00:30:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:28.994 00:30:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:28.994 00:30:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:28.994 00:30:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:28.994 00:30:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:28.994 00:30:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:28.994 00:30:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:28.994 00:30:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:28.994 00:30:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:28.994 00:30:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:28.994 00:30:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:28.994 00:30:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:28.994 00:30:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:28.994 00:30:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:28.994 00:30:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:29.253 00:30:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:29.253 00:30:16 -- nvmf/common.sh@197 -- # ip 
link set nvmf_tgt_br2 master nvmf_br 00:21:29.253 00:30:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:29.253 00:30:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:29.253 00:30:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:29.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:29.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:21:29.253 00:21:29.253 --- 10.0.0.2 ping statistics --- 00:21:29.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.253 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:21:29.253 00:30:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:29.253 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:29.253 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:21:29.253 00:21:29.253 --- 10.0.0.3 ping statistics --- 00:21:29.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.253 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:21:29.253 00:30:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:29.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:29.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:29.253 00:21:29.253 --- 10.0.0.1 ping statistics --- 00:21:29.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.253 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:29.253 00:30:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.253 00:30:16 -- nvmf/common.sh@421 -- # return 0 00:21:29.253 00:30:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:29.253 00:30:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.253 00:30:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:29.253 00:30:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:29.253 00:30:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.253 00:30:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:29.253 00:30:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:29.253 00:30:16 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:29.253 00:30:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:29.253 00:30:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:29.253 00:30:16 -- common/autotest_common.sh@10 -- # set +x 00:21:29.253 00:30:16 -- nvmf/common.sh@469 -- # nvmfpid=94874 00:21:29.253 00:30:16 -- nvmf/common.sh@470 -- # waitforlisten 94874 00:21:29.253 00:30:16 -- common/autotest_common.sh@819 -- # '[' -z 94874 ']' 00:21:29.253 00:30:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.253 00:30:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:29.253 00:30:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:29.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.253 00:30:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.253 00:30:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:29.253 00:30:16 -- common/autotest_common.sh@10 -- # set +x 00:21:29.253 [2024-07-13 00:30:16.350757] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:21:29.253 [2024-07-13 00:30:16.350873] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.511 [2024-07-13 00:30:16.497455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:29.511 [2024-07-13 00:30:16.606778] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:29.511 [2024-07-13 00:30:16.606919] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.511 [2024-07-13 00:30:16.606932] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.511 [2024-07-13 00:30:16.606940] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.511 [2024-07-13 00:30:16.607076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.511 [2024-07-13 00:30:16.607513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.511 [2024-07-13 00:30:16.607502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:30.076 00:30:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:30.076 00:30:17 -- common/autotest_common.sh@852 -- # return 0 00:21:30.076 00:30:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:30.076 00:30:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:30.076 00:30:17 -- common/autotest_common.sh@10 -- # set +x 00:21:30.335 00:30:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.335 00:30:17 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:30.593 [2024-07-13 00:30:17.576459] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.593 00:30:17 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:30.852 Malloc0 00:21:30.852 00:30:17 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:31.110 00:30:18 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:31.368 00:30:18 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.626 [2024-07-13 00:30:18.620406] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.626 00:30:18 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:31.626 [2024-07-13 00:30:18.844661] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:31.884 00:30:18 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:31.884 [2024-07-13 00:30:19.077146] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:31.884 00:30:19 -- host/failover.sh@31 -- # bdevperf_pid=94993 00:21:31.884 00:30:19 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z 
-r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:31.884 00:30:19 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:31.884 00:30:19 -- host/failover.sh@34 -- # waitforlisten 94993 /var/tmp/bdevperf.sock 00:21:31.884 00:30:19 -- common/autotest_common.sh@819 -- # '[' -z 94993 ']' 00:21:31.884 00:30:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.884 00:30:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:31.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.884 00:30:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.884 00:30:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:31.884 00:30:19 -- common/autotest_common.sh@10 -- # set +x 00:21:33.270 00:30:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:33.270 00:30:20 -- common/autotest_common.sh@852 -- # return 0 00:21:33.270 00:30:20 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:33.270 NVMe0n1 00:21:33.270 00:30:20 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:33.839 00:21:33.839 00:30:20 -- host/failover.sh@39 -- # run_test_pid=95041 00:21:33.839 00:30:20 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:33.839 00:30:20 -- host/failover.sh@41 -- # sleep 1 00:21:34.776 00:30:21 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:35.035 [2024-07-13 00:30:22.027241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027320] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027358] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with 
the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027478] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027510] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027525] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027607] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027614] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.035 [2024-07-13 00:30:22.027646] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027655] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027718] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027745] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027753] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027761] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027768] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 
00:30:22.027776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027791] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027806] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027828] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027836] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027844] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 [2024-07-13 00:30:22.027867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458800 is same with the state(5) to be set 00:21:35.036 00:30:22 -- host/failover.sh@45 -- # sleep 3 00:21:38.317 00:30:25 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:38.317 00:21:38.317 00:30:25 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:38.576 [2024-07-13 00:30:25.622875] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.622936] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.622958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.622966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.622989] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623023] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) 
to be set 00:21:38.576 [2024-07-13 00:30:25.623036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623044] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623060] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623075] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623083] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623106] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623114] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623138] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623156] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623187] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623210] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623233] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623248] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623255] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623305] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 [2024-07-13 00:30:25.623345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1459ef0 is same with the state(5) to be set 00:21:38.576 00:30:25 -- host/failover.sh@50 -- # sleep 3 00:21:41.862 00:30:28 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.862 [2024-07-13 00:30:28.883056] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.862 00:30:28 -- host/failover.sh@55 -- # sleep 1 00:21:42.800 00:30:29 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:43.059 [2024-07-13 00:30:30.165645] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.059 [2024-07-13 00:30:30.165741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.059 [2024-07-13 00:30:30.165753] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.059 [2024-07-13 00:30:30.165761] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.059 [2024-07-13 00:30:30.165769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.059 [2024-07-13 00:30:30.165778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.059 [2024-07-13 00:30:30.165787] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.059 [2024-07-13 00:30:30.165796] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.059 [2024-07-13 00:30:30.165805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.059 [2024-07-13 00:30:30.165813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.059 [2024-07-13 00:30:30.165821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.059 [2024-07-13 00:30:30.165829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.059 [2024-07-13 00:30:30.165837] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.059 [2024-07-13 00:30:30.165845] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.059 [2024-07-13 00:30:30.165852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.165860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.165867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.165875] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.165883] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.165891] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.165899] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.165907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.165914] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 
00:30:30.165922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.165929] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.165937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.165951] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.165958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.165966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.165973] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.165986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.165993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166039] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166060] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166068] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166086] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166094] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same 
with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166167] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166173] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166197] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166223] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166294] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 [2024-07-13 00:30:30.166317] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a5d0 is same with the state(5) to be set 00:21:43.060 00:30:30 -- host/failover.sh@59 -- # wait 95041 00:21:49.628 0 00:21:49.628 00:30:35 -- host/failover.sh@61 -- # killprocess 94993 00:21:49.628 00:30:35 -- common/autotest_common.sh@926 -- # '[' -z 94993 ']' 00:21:49.628 00:30:35 -- common/autotest_common.sh@930 -- # kill -0 94993 00:21:49.628 00:30:35 -- common/autotest_common.sh@931 -- # uname 00:21:49.628 00:30:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:49.628 00:30:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94993 00:21:49.628 killing process with pid 94993 00:21:49.628 00:30:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:49.628 00:30:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:49.628 00:30:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94993' 00:21:49.628 00:30:35 -- common/autotest_common.sh@945 -- # kill 94993 00:21:49.628 00:30:35 -- common/autotest_common.sh@950 -- # wait 94993 00:21:49.628 00:30:36 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:49.628 [2024-07-13 00:30:19.153562] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:49.628 [2024-07-13 00:30:19.153711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94993 ] 00:21:49.628 [2024-07-13 00:30:19.288216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.628 [2024-07-13 00:30:19.387240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.628 Running I/O for 15 seconds... 
00:21:49.628 [2024-07-13 00:30:22.028246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.628 [2024-07-13 00:30:22.028319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.628 [2024-07-13 00:30:22.028353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.628 [2024-07-13 00:30:22.028372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.628 [2024-07-13 00:30:22.028393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.628 [2024-07-13 00:30:22.028409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.628 [2024-07-13 00:30:22.028428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.628 [2024-07-13 00:30:22.028444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.628 [2024-07-13 00:30:22.028463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.628 [2024-07-13 00:30:22.028479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.628 [2024-07-13 00:30:22.028498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.628 [2024-07-13 00:30:22.028514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.628 [2024-07-13 00:30:22.028533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.628 [2024-07-13 00:30:22.028549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.628 [2024-07-13 00:30:22.028568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.628 [2024-07-13 00:30:22.028585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.628 [2024-07-13 00:30:22.028604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.628 [2024-07-13 00:30:22.028695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.628 [2024-07-13 00:30:22.028729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.628 [2024-07-13 00:30:22.028748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 
00:30:22.028767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.028784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.028836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.028855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.028873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.028890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.028908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.028925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.028952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.028974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.028993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029585] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.029969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.029984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.030017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.030047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.030064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.030079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.030095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.030109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.030125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.030140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.030156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.030172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.030187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.030202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.030224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.030240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.030256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.030270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.030286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.030308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.030326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.030341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.030358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:49.629 [2024-07-13 00:30:22.030372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.629 [2024-07-13 00:30:22.030389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.629 [2024-07-13 00:30:22.030404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.030421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.030436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.030452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.030467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.030484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.030499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.030516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.030530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.030546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.030560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.030576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.030590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.030608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.630 [2024-07-13 00:30:22.030622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.030654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.030697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.030718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.030733] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.030750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.630 [2024-07-13 00:30:22.030774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.030798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.030815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.030832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.630 [2024-07-13 00:30:22.030847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.030865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.030881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.030898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.630 [2024-07-13 00:30:22.030913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.030931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.030946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.030963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.030979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.030996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.031012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.031045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.630 [2024-07-13 00:30:22.031060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.031077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.031092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.031109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.031123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.031139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.031154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.031170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.031185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.031208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.031224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.031240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.031256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.031273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.031288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.031305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.031319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.031342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.031359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.031375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.031390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.031406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.630 [2024-07-13 00:30:22.031421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.031437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.031452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.031469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.630 [2024-07-13 00:30:22.031484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.031501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.630 [2024-07-13 00:30:22.031516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.031532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.630 [2024-07-13 00:30:22.031546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.630 [2024-07-13 00:30:22.031563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.630 [2024-07-13 00:30:22.031578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.031596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.631 [2024-07-13 00:30:22.031619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.031650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.031666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.031684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.031698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.031715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.631 [2024-07-13 00:30:22.031730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.031746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.631 [2024-07-13 00:30:22.031761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.031777] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.631 [2024-07-13 00:30:22.031792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.031808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.631 [2024-07-13 00:30:22.031823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.031840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.031855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.031877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.031893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.031909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.031924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.031941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.031957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.031973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.031988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.631 [2024-07-13 00:30:22.032020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.631 [2024-07-13 00:30:22.032093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:29 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.631 [2024-07-13 00:30:22.032125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130936 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:49.631 [2024-07-13 00:30:22.032868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.032953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.032998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.033026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.033049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.033087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.631 [2024-07-13 00:30:22.033106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.631 [2024-07-13 00:30:22.033131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c9f70 is same with the state(5) to be set 00:21:49.631 [2024-07-13 00:30:22.033152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:49.631 [2024-07-13 00:30:22.033165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:49.631 [2024-07-13 00:30:22.033177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:136 len:8 PRP1 0x0 PRP2 0x0 00:21:49.632 [2024-07-13 00:30:22.033192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:22.033252] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19c9f70 was disconnected and freed. reset controller. 
00:21:49.632 [2024-07-13 00:30:22.033274] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:21:49.632 [2024-07-13 00:30:22.033341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:49.632 [2024-07-13 00:30:22.033365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:49.632 [2024-07-13 00:30:22.033381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:49.632 [2024-07-13 00:30:22.033396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:49.632 [2024-07-13 00:30:22.033412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:49.632 [2024-07-13 00:30:22.033426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:49.632 [2024-07-13 00:30:22.033442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:49.632 [2024-07-13 00:30:22.033457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:49.632 [2024-07-13 00:30:22.033471] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:49.632 [2024-07-13 00:30:22.035949] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:21:49.632 [2024-07-13 00:30:22.036008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19aaf20 (9): Bad file descriptor 
00:21:49.632 [2024-07-13 00:30:22.068302] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:49.632 [2024-07-13 00:30:25.623468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.623535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.623566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.623608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.623646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.623665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.623684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.623718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.623743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.623761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.623779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.623796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.623815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.623832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.623851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.623867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.623886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.623903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.623921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:28720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.623938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.623958] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.623975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.623994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.624011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.624048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.624065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.624083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:29344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.624099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.624117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.624144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.624163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.624180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.624198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:29400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.624216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.624235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.632 [2024-07-13 00:30:25.624252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.624270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.632 [2024-07-13 00:30:25.624287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.624304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.632 [2024-07-13 00:30:25.624321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.624338] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.624355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.624373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.624390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.624408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:29448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.624425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.624443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.632 [2024-07-13 00:30:25.624459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.624476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.632 [2024-07-13 00:30:25.624493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.632 [2024-07-13 00:30:25.624510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.632 [2024-07-13 00:30:25.624542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.624560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.633 [2024-07-13 00:30:25.624576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.624603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.624620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.624696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.624715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.624735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.624752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.624771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:28848 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.624787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.624805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.624822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.624840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.624859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.624878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.624895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.624914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.624931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.624949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.624997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.625029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:28944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.625075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:28960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.625108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:28968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.625148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:49.633 [2024-07-13 00:30:25.625182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.625215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.625247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.625279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.625310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.633 [2024-07-13 00:30:25.625342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.625374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.633 [2024-07-13 00:30:25.625405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.633 [2024-07-13 00:30:25.625438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.625470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.625502] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.633 [2024-07-13 00:30:25.625535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.633 [2024-07-13 00:30:25.625576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:29568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.625610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.625653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.633 [2024-07-13 00:30:25.625674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.633 [2024-07-13 00:30:25.625690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.625707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.634 [2024-07-13 00:30:25.625722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.625738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:29048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.625754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.625770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.625786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.625803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.625818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.625835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:29096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.625850] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.625868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.625884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.625901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.625915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.625932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.625947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.625965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.625989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.626025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.626058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.634 [2024-07-13 00:30:25.626091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.634 [2024-07-13 00:30:25.626124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.634 [2024-07-13 00:30:25.626157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.626190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.634 [2024-07-13 00:30:25.626223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.634 [2024-07-13 00:30:25.626255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:29664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.626286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:29672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.626318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.626350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.634 [2024-07-13 00:30:25.626382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.634 [2024-07-13 00:30:25.626422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.626457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.634 [2024-07-13 00:30:25.626489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:29720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.626529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:49.634 [2024-07-13 00:30:25.626547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:29728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.626563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.634 [2024-07-13 00:30:25.626595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.634 [2024-07-13 00:30:25.626661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.634 [2024-07-13 00:30:25.626695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.634 [2024-07-13 00:30:25.626728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.634 [2024-07-13 00:30:25.626761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.634 [2024-07-13 00:30:25.626793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.634 [2024-07-13 00:30:25.626810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.635 [2024-07-13 00:30:25.626826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.626843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.626860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.626887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.626904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.626921] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.635 [2024-07-13 00:30:25.626937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.626954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.626970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.626995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:29208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.627011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:29216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.627059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.627093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.627130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:29240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.627164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.627196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.627229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.627261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.627293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.635 [2024-07-13 00:30:25.627332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.635 [2024-07-13 00:30:25.627367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.635 [2024-07-13 00:30:25.627416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.635 [2024-07-13 00:30:25.627449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:29864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.627482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.635 [2024-07-13 00:30:25.627514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.635 [2024-07-13 00:30:25.627547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.635 [2024-07-13 00:30:25.627579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:29896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.627612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:88 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.635 [2024-07-13 00:30:25.627667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.635 [2024-07-13 00:30:25.627708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.635 [2024-07-13 00:30:25.627743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.627793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.627834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.627867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.627899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.635 [2024-07-13 00:30:25.627932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.635 [2024-07-13 00:30:25.627964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.627981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:29272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.627996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.628013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29288 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.628028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.628046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.628061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.628078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.628094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.628111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:29352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.628126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.628143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.628159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.628176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.635 [2024-07-13 00:30:25.628191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.628214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b7390 is same with the state(5) to be set 00:21:49.635 [2024-07-13 00:30:25.628235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:49.635 [2024-07-13 00:30:25.628255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:49.635 [2024-07-13 00:30:25.628274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29392 len:8 PRP1 0x0 PRP2 0x0 00:21:49.635 [2024-07-13 00:30:25.628290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.635 [2024-07-13 00:30:25.628350] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19b7390 was disconnected and freed. reset controller. 
00:21:49.635 [2024-07-13 00:30:25.628371] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:49.635 [2024-07-13 00:30:25.628434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.635 [2024-07-13 00:30:25.628457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:25.628475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.636 [2024-07-13 00:30:25.628490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:25.628506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.636 [2024-07-13 00:30:25.628520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:25.628536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.636 [2024-07-13 00:30:25.628550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:25.628566] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.636 [2024-07-13 00:30:25.630930] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.636 [2024-07-13 00:30:25.630974] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19aaf20 (9): Bad file descriptor 00:21:49.636 [2024-07-13 00:30:25.662354] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
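Editor's note: the block above is one complete failover cycle as bdev_nvme reports it: every I/O still queued on the dying qpair is completed with "ABORTED - SQ DELETION", the qpair is disconnected and freed, the driver fails over from 10.0.0.2:4421 to 10.0.0.2:4422, reconnects, and finishes with "Resetting controller successful". A minimal shell sketch for pulling these events out of a captured bdevperf log is below; try.txt matches the file the harness cats later in this log, and any other filename would be an assumption.

  LOG=try.txt   # assumed local copy of the bdevperf output

  # list every path transition recorded by bdev_nvme_failover_trid
  grep -o 'Start failover from [0-9.:]* to [0-9.:]*' "$LOG"

  # rough count of READ/WRITE commands printed while qpairs were torn down
  grep -c 'print_command: \*NOTICE\*: READ' "$LOG"
  grep -c 'print_command: \*NOTICE\*: WRITE' "$LOG"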
00:21:49.636 [2024-07-13 00:30:30.166424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.166491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.166522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.166542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.166561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.166578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.166596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.166612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.166642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.166671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.166739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.166759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.166777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:31696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.166793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.166811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.166827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.166846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:31720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.166862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.166880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.166896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 
00:30:30.166915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.166931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.166948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.166964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.166982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.166998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.636 [2024-07-13 00:30:30.167542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.636 [2024-07-13 00:30:30.167574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.636 [2024-07-13 00:30:30.167608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.636 [2024-07-13 00:30:30.167670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.636 [2024-07-13 00:30:30.167780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.636 [2024-07-13 00:30:30.167937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.636 [2024-07-13 00:30:30.167954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.167971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.167988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.168086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.168118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31880 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.168150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.168182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.168223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.168257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.168288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.637 [2024-07-13 00:30:30.168320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.168352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.168383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:31944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.168415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.168448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:49.637 [2024-07-13 00:30:30.168481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.637 [2024-07-13 00:30:30.168512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:31976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.637 [2024-07-13 00:30:30.168544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.637 [2024-07-13 00:30:30.168576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.168607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:32000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.637 [2024-07-13 00:30:30.168731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.637 [2024-07-13 00:30:30.168766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.637 [2024-07-13 00:30:30.168801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.637 [2024-07-13 00:30:30.168847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.637 [2024-07-13 00:30:30.168882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.168917] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.168951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.168985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.169016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.169053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.637 [2024-07-13 00:30:30.169068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.169085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:32072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.637 [2024-07-13 00:30:30.169100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.169116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.637 [2024-07-13 00:30:30.169132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.169149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.637 [2024-07-13 00:30:30.169163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.169180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.169205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.169281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:32104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.169300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.169317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.169332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.169349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.169364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.637 [2024-07-13 00:30:30.169381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.637 [2024-07-13 00:30:30.169396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.169412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.169427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.169444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.169459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.169477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.169492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.169509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.169524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.169559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.169574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.169591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.169606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.169641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.169657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.169674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.169704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.169726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.169751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.169770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.169786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.169804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.169819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.169837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.169865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.169883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.169899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.169917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.169933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.169950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.169966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.169984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.638 [2024-07-13 00:30:30.170000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.170033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.638 [2024-07-13 00:30:30.170067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.638 [2024-07-13 00:30:30.170109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.638 [2024-07-13 00:30:30.170143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.170176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.170219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.638 [2024-07-13 00:30:30.170252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.638 [2024-07-13 00:30:30.170285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.170318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.638 [2024-07-13 00:30:30.170351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:32216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.170384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.638 [2024-07-13 00:30:30.170423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.170457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 
[2024-07-13 00:30:30.170474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.170491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:32248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.170525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.638 [2024-07-13 00:30:30.170559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.170593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.170662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:32280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.638 [2024-07-13 00:30:30.170705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.638 [2024-07-13 00:30:30.170739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.170772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:32304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.638 [2024-07-13 00:30:30.170806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:32312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.638 [2024-07-13 00:30:30.170840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:32320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.170873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.638 [2024-07-13 00:30:30.170906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.638 [2024-07-13 00:30:30.170939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.638 [2024-07-13 00:30:30.170957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.639 [2024-07-13 00:30:30.170973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.170991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.639 [2024-07-13 00:30:30.171013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.171037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.639 [2024-07-13 00:30:30.171053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.171071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.639 [2024-07-13 00:30:30.171087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.171105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:32376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.639 [2024-07-13 00:30:30.171129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.171148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.639 [2024-07-13 00:30:30.171164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.171182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.639 [2024-07-13 00:30:30.171198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.171216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:58 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.639 [2024-07-13 00:30:30.171232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.171255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.639 [2024-07-13 00:30:30.171273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.171291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.639 [2024-07-13 00:30:30.171307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.171324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.639 [2024-07-13 00:30:30.171340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.171358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.639 [2024-07-13 00:30:30.171374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.171392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.639 [2024-07-13 00:30:30.171407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.171424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70070 is same with the state(5) to be set 00:21:49.639 [2024-07-13 00:30:30.171445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:49.639 [2024-07-13 00:30:30.171458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:49.639 [2024-07-13 00:30:30.171471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31800 len:8 PRP1 0x0 PRP2 0x0 00:21:49.639 [2024-07-13 00:30:30.171486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.171548] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b70070 was disconnected and freed. reset controller. 
00:21:49.639 [2024-07-13 00:30:30.171570] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:49.639 [2024-07-13 00:30:30.171652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.639 [2024-07-13 00:30:30.171678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.171712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.639 [2024-07-13 00:30:30.171730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.171747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.639 [2024-07-13 00:30:30.171763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.171779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.639 [2024-07-13 00:30:30.171795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.639 [2024-07-13 00:30:30.171811] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.639 [2024-07-13 00:30:30.171853] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19aaf20 (9): Bad file descriptor 00:21:49.639 [2024-07-13 00:30:30.174137] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.639 [2024-07-13 00:30:30.206367] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:49.639 00:21:49.639 Latency(us) 00:21:49.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.639 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:49.639 Verification LBA range: start 0x0 length 0x4000 00:21:49.639 NVMe0n1 : 15.01 13697.78 53.51 311.15 0.00 9120.14 726.11 15966.95 00:21:49.639 =================================================================================================================== 00:21:49.639 Total : 13697.78 53.51 311.15 0.00 9120.14 726.11 15966.95 00:21:49.639 Received shutdown signal, test time was about 15.000000 seconds 00:21:49.639 00:21:49.639 Latency(us) 00:21:49.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.639 =================================================================================================================== 00:21:49.639 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:49.639 00:30:36 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:49.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
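Editor's note: the first bdevperf run ends here and failover.sh moves on to verification: the grep just above counts "Resetting controller successful" messages in the captured log, and the trace that follows requires the count to be exactly 3, one reset per hop (4420 to 4421, 4421 to 4422, 4422 back to 4420). A sketch of that check, assuming the first run's output sits in try.txt:

  count=$(grep -c 'Resetting controller successful' try.txt)
  (( count == 3 )) || { echo "expected 3 successful resets, got $count"; exit 1; }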
00:21:49.639 00:30:36 -- host/failover.sh@65 -- # count=3 00:21:49.639 00:30:36 -- host/failover.sh@67 -- # (( count != 3 )) 00:21:49.639 00:30:36 -- host/failover.sh@73 -- # bdevperf_pid=95244 00:21:49.639 00:30:36 -- host/failover.sh@75 -- # waitforlisten 95244 /var/tmp/bdevperf.sock 00:21:49.639 00:30:36 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:49.639 00:30:36 -- common/autotest_common.sh@819 -- # '[' -z 95244 ']' 00:21:49.639 00:30:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.639 00:30:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:49.639 00:30:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.639 00:30:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:49.639 00:30:36 -- common/autotest_common.sh@10 -- # set +x 00:21:50.207 00:30:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:50.207 00:30:37 -- common/autotest_common.sh@852 -- # return 0 00:21:50.207 00:30:37 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:50.466 [2024-07-13 00:30:37.444443] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:50.466 00:30:37 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:50.724 [2024-07-13 00:30:37.696847] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:50.724 00:30:37 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:50.982 NVMe0n1 00:21:50.982 00:30:37 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:51.240 00:21:51.240 00:30:38 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:51.498 00:21:51.498 00:30:38 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:51.498 00:30:38 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:51.756 00:30:38 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:52.015 00:30:39 -- host/failover.sh@87 -- # sleep 3 00:21:55.297 00:30:42 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:55.297 00:30:42 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:55.297 00:30:42 -- host/failover.sh@90 -- # run_test_pid=95385 00:21:55.297 00:30:42 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:55.297 00:30:42 -- host/failover.sh@92 -- # wait 95385 00:21:56.670 0 00:21:56.670 00:30:43 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:56.670 [2024-07-13 00:30:36.221267] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:56.670 [2024-07-13 00:30:36.221415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95244 ] 00:21:56.670 [2024-07-13 00:30:36.356969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.670 [2024-07-13 00:30:36.445457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.670 [2024-07-13 00:30:39.093207] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:56.670 [2024-07-13 00:30:39.093335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.670 [2024-07-13 00:30:39.093363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.670 [2024-07-13 00:30:39.093384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.670 [2024-07-13 00:30:39.093400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.670 [2024-07-13 00:30:39.093415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.670 [2024-07-13 00:30:39.093430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.670 [2024-07-13 00:30:39.093445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.670 [2024-07-13 00:30:39.093460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.670 [2024-07-13 00:30:39.093476] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:56.670 [2024-07-13 00:30:39.093533] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:56.670 [2024-07-13 00:30:39.093570] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x982f20 (9): Bad file descriptor 00:21:56.670 [2024-07-13 00:30:39.096794] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:56.670 Running I/O for 1 seconds... 
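The commands that produced the try.txt dump above are scattered through the trace; gathered together, the single-failover phase looks roughly like the following. Paths, ports and flags are copied from the log; the RPC shorthand and the backgrounding are mine, so treat this as a sketch of the flow rather than the verbatim script:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BDEVPERF_SOCK=/var/tmp/bdevperf.sock

  # Target side: expose nqn.2016-06.io.spdk:cnode1 on two extra portals.
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # Host side: a bdevperf app in RPC mode, attached to all three portals.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r $BDEVPERF_SOCK -q 128 -o 4096 -w verify -t 1 -f &
  $RPC -s $BDEVPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC -s $BDEVPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC -s $BDEVPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Drop the 4420 path so the controller fails over to 4421, then run the verify job.
  $RPC -s $BDEVPERF_SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BDEVPERF_SOCK perform_tests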
00:21:56.670 00:21:56.670 Latency(us) 00:21:56.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.670 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:56.670 Verification LBA range: start 0x0 length 0x4000 00:21:56.670 NVMe0n1 : 1.01 12843.40 50.17 0.00 0.00 9920.04 1280.93 15966.95 00:21:56.670 =================================================================================================================== 00:21:56.670 Total : 12843.40 50.17 0.00 0.00 9920.04 1280.93 15966.95 00:21:56.670 00:30:43 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:56.670 00:30:43 -- host/failover.sh@95 -- # grep -q NVMe0 00:21:56.670 00:30:43 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:56.929 00:30:43 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:56.929 00:30:43 -- host/failover.sh@99 -- # grep -q NVMe0 00:21:57.188 00:30:44 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:57.188 00:30:44 -- host/failover.sh@101 -- # sleep 3 00:22:00.480 00:30:47 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:00.480 00:30:47 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:00.480 00:30:47 -- host/failover.sh@108 -- # killprocess 95244 00:22:00.480 00:30:47 -- common/autotest_common.sh@926 -- # '[' -z 95244 ']' 00:22:00.480 00:30:47 -- common/autotest_common.sh@930 -- # kill -0 95244 00:22:00.480 00:30:47 -- common/autotest_common.sh@931 -- # uname 00:22:00.480 00:30:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:00.480 00:30:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95244 00:22:00.480 killing process with pid 95244 00:22:00.480 00:30:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:00.480 00:30:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:00.480 00:30:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95244' 00:22:00.480 00:30:47 -- common/autotest_common.sh@945 -- # kill 95244 00:22:00.480 00:30:47 -- common/autotest_common.sh@950 -- # wait 95244 00:22:00.781 00:30:47 -- host/failover.sh@110 -- # sync 00:22:00.781 00:30:47 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:01.058 00:30:48 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:01.058 00:30:48 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:01.058 00:30:48 -- host/failover.sh@116 -- # nvmftestfini 00:22:01.058 00:30:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:01.058 00:30:48 -- nvmf/common.sh@116 -- # sync 00:22:01.058 00:30:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:01.058 00:30:48 -- nvmf/common.sh@119 -- # set +e 00:22:01.058 00:30:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:01.058 00:30:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:01.058 rmmod nvme_tcp 00:22:01.058 rmmod nvme_fabrics 00:22:01.058 rmmod nvme_keyring 00:22:01.058 00:30:48 -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-fabrics 00:22:01.058 00:30:48 -- nvmf/common.sh@123 -- # set -e 00:22:01.058 00:30:48 -- nvmf/common.sh@124 -- # return 0 00:22:01.058 00:30:48 -- nvmf/common.sh@477 -- # '[' -n 94874 ']' 00:22:01.058 00:30:48 -- nvmf/common.sh@478 -- # killprocess 94874 00:22:01.058 00:30:48 -- common/autotest_common.sh@926 -- # '[' -z 94874 ']' 00:22:01.058 00:30:48 -- common/autotest_common.sh@930 -- # kill -0 94874 00:22:01.058 00:30:48 -- common/autotest_common.sh@931 -- # uname 00:22:01.058 00:30:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:01.058 00:30:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94874 00:22:01.058 killing process with pid 94874 00:22:01.058 00:30:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:01.058 00:30:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:01.058 00:30:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94874' 00:22:01.058 00:30:48 -- common/autotest_common.sh@945 -- # kill 94874 00:22:01.058 00:30:48 -- common/autotest_common.sh@950 -- # wait 94874 00:22:01.317 00:30:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:01.318 00:30:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:01.318 00:30:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:01.318 00:30:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:01.318 00:30:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:01.318 00:30:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.318 00:30:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.318 00:30:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.577 00:30:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:01.577 00:22:01.577 real 0m32.720s 00:22:01.577 user 2m6.181s 00:22:01.577 sys 0m5.256s 00:22:01.577 00:30:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:01.577 ************************************ 00:22:01.577 END TEST nvmf_failover 00:22:01.577 ************************************ 00:22:01.577 00:30:48 -- common/autotest_common.sh@10 -- # set +x 00:22:01.577 00:30:48 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:01.577 00:30:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:01.577 00:30:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:01.577 00:30:48 -- common/autotest_common.sh@10 -- # set +x 00:22:01.577 ************************************ 00:22:01.577 START TEST nvmf_discovery 00:22:01.577 ************************************ 00:22:01.577 00:30:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:01.577 * Looking for test storage... 
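For completeness, the nvmf_failover teardown that scrolled past just above amounts to roughly the following. The pids are the ones from this particular run and the killprocess helper is reduced to a plain kill, so this is a sketch rather than the exact nvmftestfini logic:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  kill 95244                                                # bdevperf ($bdevperf_pid in this run)
  sync
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  # nvmftestfini: unload the kernel initiator modules, stop the target app, flush the test interface.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  modprobe -v -r nvme-keyring
  kill 94874                                                # nvmf_tgt ($nvmfpid in this run)
  ip -4 addr flush nvmf_init_if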
00:22:01.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:01.577 00:30:48 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:01.577 00:30:48 -- nvmf/common.sh@7 -- # uname -s 00:22:01.577 00:30:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.577 00:30:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.577 00:30:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.577 00:30:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.577 00:30:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.577 00:30:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.577 00:30:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.577 00:30:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.577 00:30:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.577 00:30:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.577 00:30:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:22:01.577 00:30:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:22:01.577 00:30:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.577 00:30:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.577 00:30:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:01.577 00:30:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:01.577 00:30:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.577 00:30:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.577 00:30:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.577 00:30:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.577 00:30:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.577 00:30:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.577 00:30:48 -- paths/export.sh@5 
-- # export PATH 00:22:01.577 00:30:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.577 00:30:48 -- nvmf/common.sh@46 -- # : 0 00:22:01.577 00:30:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:01.577 00:30:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:01.577 00:30:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:01.577 00:30:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.577 00:30:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.577 00:30:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:01.577 00:30:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:01.577 00:30:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:01.577 00:30:48 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:01.577 00:30:48 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:01.577 00:30:48 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:01.577 00:30:48 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:01.577 00:30:48 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:01.577 00:30:48 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:01.577 00:30:48 -- host/discovery.sh@25 -- # nvmftestinit 00:22:01.577 00:30:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:01.577 00:30:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.577 00:30:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:01.577 00:30:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:01.577 00:30:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:01.577 00:30:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.577 00:30:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.577 00:30:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.577 00:30:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:01.577 00:30:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:01.577 00:30:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:01.577 00:30:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:01.577 00:30:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:01.577 00:30:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:01.577 00:30:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.577 00:30:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.577 00:30:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:01.577 00:30:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:01.577 00:30:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:01.577 00:30:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:01.577 00:30:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:01.577 00:30:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.577 00:30:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:01.577 
00:30:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:01.577 00:30:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:01.577 00:30:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:01.577 00:30:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:01.577 00:30:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:01.577 Cannot find device "nvmf_tgt_br" 00:22:01.577 00:30:48 -- nvmf/common.sh@154 -- # true 00:22:01.577 00:30:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:01.577 Cannot find device "nvmf_tgt_br2" 00:22:01.577 00:30:48 -- nvmf/common.sh@155 -- # true 00:22:01.577 00:30:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:01.577 00:30:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:01.577 Cannot find device "nvmf_tgt_br" 00:22:01.577 00:30:48 -- nvmf/common.sh@157 -- # true 00:22:01.577 00:30:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:01.836 Cannot find device "nvmf_tgt_br2" 00:22:01.836 00:30:48 -- nvmf/common.sh@158 -- # true 00:22:01.836 00:30:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:01.836 00:30:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:01.836 00:30:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:01.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:01.836 00:30:48 -- nvmf/common.sh@161 -- # true 00:22:01.836 00:30:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:01.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:01.836 00:30:48 -- nvmf/common.sh@162 -- # true 00:22:01.836 00:30:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:01.836 00:30:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:01.836 00:30:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:01.836 00:30:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:01.836 00:30:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:01.836 00:30:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:01.836 00:30:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:01.836 00:30:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:01.836 00:30:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:01.836 00:30:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:01.836 00:30:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:01.836 00:30:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:01.836 00:30:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:01.836 00:30:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:01.836 00:30:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:01.836 00:30:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:01.836 00:30:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:01.836 00:30:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:01.836 00:30:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:22:01.836 00:30:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:01.836 00:30:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:02.095 00:30:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:02.095 00:30:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:02.095 00:30:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:02.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:22:02.095 00:22:02.095 --- 10.0.0.2 ping statistics --- 00:22:02.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.095 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:22:02.095 00:30:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:02.095 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:02.095 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:22:02.095 00:22:02.095 --- 10.0.0.3 ping statistics --- 00:22:02.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.095 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:02.095 00:30:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:02.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:22:02.095 00:22:02.095 --- 10.0.0.1 ping statistics --- 00:22:02.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.095 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:22:02.095 00:30:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.095 00:30:49 -- nvmf/common.sh@421 -- # return 0 00:22:02.095 00:30:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:02.095 00:30:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.095 00:30:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:02.095 00:30:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:02.095 00:30:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.095 00:30:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:02.095 00:30:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:02.095 00:30:49 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:02.095 00:30:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:02.095 00:30:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:02.095 00:30:49 -- common/autotest_common.sh@10 -- # set +x 00:22:02.095 00:30:49 -- nvmf/common.sh@469 -- # nvmfpid=95690 00:22:02.095 00:30:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:02.095 00:30:49 -- nvmf/common.sh@470 -- # waitforlisten 95690 00:22:02.095 00:30:49 -- common/autotest_common.sh@819 -- # '[' -z 95690 ']' 00:22:02.096 00:30:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.096 00:30:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:02.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.096 00:30:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
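The ping results above are the sanity check on the virtual topology that nvmf_veth_init builds for these TCP tests. Pulled out of the trace (the matching "ip link set ... up" calls on each interface are elided here), the layout is essentially:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # primary target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # secondary target address

  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT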
00:22:02.096 00:30:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:02.096 00:30:49 -- common/autotest_common.sh@10 -- # set +x 00:22:02.096 [2024-07-13 00:30:49.172252] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:02.096 [2024-07-13 00:30:49.172369] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.096 [2024-07-13 00:30:49.316916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.354 [2024-07-13 00:30:49.424488] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:02.354 [2024-07-13 00:30:49.424721] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.354 [2024-07-13 00:30:49.424740] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.354 [2024-07-13 00:30:49.424751] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.354 [2024-07-13 00:30:49.424785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.291 00:30:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:03.291 00:30:50 -- common/autotest_common.sh@852 -- # return 0 00:22:03.291 00:30:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:03.291 00:30:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:03.291 00:30:50 -- common/autotest_common.sh@10 -- # set +x 00:22:03.291 00:30:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.291 00:30:50 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:03.291 00:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.291 00:30:50 -- common/autotest_common.sh@10 -- # set +x 00:22:03.291 [2024-07-13 00:30:50.246054] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.291 00:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:03.291 00:30:50 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:03.291 00:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.291 00:30:50 -- common/autotest_common.sh@10 -- # set +x 00:22:03.291 [2024-07-13 00:30:50.254158] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:03.291 00:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:03.291 00:30:50 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:03.291 00:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.291 00:30:50 -- common/autotest_common.sh@10 -- # set +x 00:22:03.291 null0 00:22:03.291 00:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:03.291 00:30:50 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:03.291 00:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.291 00:30:50 -- common/autotest_common.sh@10 -- # set +x 00:22:03.291 null1 00:22:03.291 00:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:03.291 00:30:50 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:03.291 00:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.291 00:30:50 -- 
common/autotest_common.sh@10 -- # set +x 00:22:03.291 00:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:03.291 00:30:50 -- host/discovery.sh@45 -- # hostpid=95745 00:22:03.291 00:30:50 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:03.291 00:30:50 -- host/discovery.sh@46 -- # waitforlisten 95745 /tmp/host.sock 00:22:03.291 00:30:50 -- common/autotest_common.sh@819 -- # '[' -z 95745 ']' 00:22:03.291 00:30:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:22:03.291 00:30:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:03.291 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:03.291 00:30:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:03.291 00:30:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:03.291 00:30:50 -- common/autotest_common.sh@10 -- # set +x 00:22:03.291 [2024-07-13 00:30:50.341807] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:03.291 [2024-07-13 00:30:50.341910] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95745 ] 00:22:03.291 [2024-07-13 00:30:50.485512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.550 [2024-07-13 00:30:50.585874] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:03.550 [2024-07-13 00:30:50.586104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.117 00:30:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:04.117 00:30:51 -- common/autotest_common.sh@852 -- # return 0 00:22:04.117 00:30:51 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:04.117 00:30:51 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:04.117 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.117 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:22:04.117 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.117 00:30:51 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:04.117 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.117 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:22:04.375 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.375 00:30:51 -- host/discovery.sh@72 -- # notify_id=0 00:22:04.375 00:30:51 -- host/discovery.sh@78 -- # get_subsystem_names 00:22:04.375 00:30:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:04.375 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.375 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:22:04.375 00:30:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:04.375 00:30:51 -- host/discovery.sh@59 -- # sort 00:22:04.375 00:30:51 -- host/discovery.sh@59 -- # xargs 00:22:04.375 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.375 00:30:51 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:22:04.375 00:30:51 -- host/discovery.sh@79 -- # get_bdev_list 00:22:04.375 
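Note on the setup so far: the discovery test runs two SPDK apps against the veth topology, one as the NVMe-oF target and one standing in for the host. Condensed from the rpc_cmd lines above (rpc_cmd is the test suite's wrapper around rpc.py; the RPC shorthand below is just for this sketch):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Target app (default /var/tmp/spdk.sock): TCP transport, discovery listener on 8009,
  # and two null bdevs that back the namespaces added later in the test.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $RPC bdev_null_create null0 1000 512
  $RPC bdev_null_create null1 1000 512

  # "Host" app on /tmp/host.sock: enable bdev_nvme logging and start the discovery poller.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  $RPC -s /tmp/host.sock log_set_flag bdev_nvme
  $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test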
00:30:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.375 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.375 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:22:04.375 00:30:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:04.375 00:30:51 -- host/discovery.sh@55 -- # sort 00:22:04.375 00:30:51 -- host/discovery.sh@55 -- # xargs 00:22:04.375 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.375 00:30:51 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:22:04.375 00:30:51 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:04.375 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.375 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:22:04.375 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.375 00:30:51 -- host/discovery.sh@82 -- # get_subsystem_names 00:22:04.375 00:30:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:04.375 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.375 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:22:04.375 00:30:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:04.375 00:30:51 -- host/discovery.sh@59 -- # sort 00:22:04.375 00:30:51 -- host/discovery.sh@59 -- # xargs 00:22:04.375 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.375 00:30:51 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:22:04.375 00:30:51 -- host/discovery.sh@83 -- # get_bdev_list 00:22:04.375 00:30:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.375 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.375 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:22:04.375 00:30:51 -- host/discovery.sh@55 -- # sort 00:22:04.375 00:30:51 -- host/discovery.sh@55 -- # xargs 00:22:04.375 00:30:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:04.375 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.375 00:30:51 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:04.375 00:30:51 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:04.375 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.375 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:22:04.375 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.634 00:30:51 -- host/discovery.sh@86 -- # get_subsystem_names 00:22:04.634 00:30:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:04.634 00:30:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:04.634 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.634 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:22:04.634 00:30:51 -- host/discovery.sh@59 -- # sort 00:22:04.634 00:30:51 -- host/discovery.sh@59 -- # xargs 00:22:04.634 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.634 00:30:51 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:22:04.634 00:30:51 -- host/discovery.sh@87 -- # get_bdev_list 00:22:04.634 00:30:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.634 00:30:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:04.634 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.634 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:22:04.634 00:30:51 -- host/discovery.sh@55 -- # sort 00:22:04.634 00:30:51 -- host/discovery.sh@55 -- # 
xargs 00:22:04.634 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.634 00:30:51 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:04.634 00:30:51 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:04.634 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.634 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:22:04.634 [2024-07-13 00:30:51.726497] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.634 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.634 00:30:51 -- host/discovery.sh@92 -- # get_subsystem_names 00:22:04.634 00:30:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:04.634 00:30:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:04.634 00:30:51 -- host/discovery.sh@59 -- # sort 00:22:04.634 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.634 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:22:04.634 00:30:51 -- host/discovery.sh@59 -- # xargs 00:22:04.634 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.634 00:30:51 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:04.634 00:30:51 -- host/discovery.sh@93 -- # get_bdev_list 00:22:04.634 00:30:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.634 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.634 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:22:04.634 00:30:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:04.634 00:30:51 -- host/discovery.sh@55 -- # sort 00:22:04.634 00:30:51 -- host/discovery.sh@55 -- # xargs 00:22:04.634 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.634 00:30:51 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:22:04.634 00:30:51 -- host/discovery.sh@94 -- # get_notification_count 00:22:04.634 00:30:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:04.634 00:30:51 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:04.634 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.634 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:22:04.634 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.892 00:30:51 -- host/discovery.sh@74 -- # notification_count=0 00:22:04.892 00:30:51 -- host/discovery.sh@75 -- # notify_id=0 00:22:04.892 00:30:51 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:22:04.892 00:30:51 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:04.892 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.892 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:22:04.892 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:04.892 00:30:51 -- host/discovery.sh@100 -- # sleep 1 00:22:05.152 [2024-07-13 00:30:52.349327] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:05.152 [2024-07-13 00:30:52.349390] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:05.152 [2024-07-13 00:30:52.349413] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:05.411 [2024-07-13 00:30:52.435414] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:05.411 [2024-07-13 00:30:52.491299] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:05.411 [2024-07-13 00:30:52.491336] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:05.977 00:30:52 -- host/discovery.sh@101 -- # get_subsystem_names 00:22:05.977 00:30:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:05.977 00:30:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:05.977 00:30:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:05.977 00:30:52 -- common/autotest_common.sh@10 -- # set +x 00:22:05.977 00:30:52 -- host/discovery.sh@59 -- # xargs 00:22:05.977 00:30:52 -- host/discovery.sh@59 -- # sort 00:22:05.977 00:30:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:05.977 00:30:52 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.977 00:30:52 -- host/discovery.sh@102 -- # get_bdev_list 00:22:05.977 00:30:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.977 00:30:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:05.977 00:30:52 -- host/discovery.sh@55 -- # sort 00:22:05.977 00:30:52 -- host/discovery.sh@55 -- # xargs 00:22:05.977 00:30:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:05.977 00:30:52 -- common/autotest_common.sh@10 -- # set +x 00:22:05.977 00:30:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:05.977 00:30:53 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:05.977 00:30:53 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:22:05.977 00:30:53 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:05.977 00:30:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:05.977 00:30:53 -- common/autotest_common.sh@10 -- # set +x 00:22:05.977 00:30:53 -- host/discovery.sh@63 -- # sort -n 00:22:05.977 00:30:53 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:05.977 00:30:53 -- 
host/discovery.sh@63 -- # xargs 00:22:05.977 00:30:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:05.977 00:30:53 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:22:05.977 00:30:53 -- host/discovery.sh@104 -- # get_notification_count 00:22:05.977 00:30:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:05.977 00:30:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:05.977 00:30:53 -- host/discovery.sh@74 -- # jq '. | length' 00:22:05.977 00:30:53 -- common/autotest_common.sh@10 -- # set +x 00:22:05.977 00:30:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:05.977 00:30:53 -- host/discovery.sh@74 -- # notification_count=1 00:22:05.977 00:30:53 -- host/discovery.sh@75 -- # notify_id=1 00:22:05.977 00:30:53 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:22:05.977 00:30:53 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:05.977 00:30:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:05.977 00:30:53 -- common/autotest_common.sh@10 -- # set +x 00:22:05.977 00:30:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:05.977 00:30:53 -- host/discovery.sh@109 -- # sleep 1 00:22:07.350 00:30:54 -- host/discovery.sh@110 -- # get_bdev_list 00:22:07.350 00:30:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:07.350 00:30:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:07.350 00:30:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.350 00:30:54 -- common/autotest_common.sh@10 -- # set +x 00:22:07.350 00:30:54 -- host/discovery.sh@55 -- # sort 00:22:07.350 00:30:54 -- host/discovery.sh@55 -- # xargs 00:22:07.350 00:30:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.350 00:30:54 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:07.350 00:30:54 -- host/discovery.sh@111 -- # get_notification_count 00:22:07.350 00:30:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:07.350 00:30:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.350 00:30:54 -- common/autotest_common.sh@10 -- # set +x 00:22:07.350 00:30:54 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:07.350 00:30:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.350 00:30:54 -- host/discovery.sh@74 -- # notification_count=1 00:22:07.350 00:30:54 -- host/discovery.sh@75 -- # notify_id=2 00:22:07.350 00:30:54 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:22:07.350 00:30:54 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:07.350 00:30:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:07.350 00:30:54 -- common/autotest_common.sh@10 -- # set +x 00:22:07.350 [2024-07-13 00:30:54.279625] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:07.350 [2024-07-13 00:30:54.280164] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:07.350 [2024-07-13 00:30:54.280198] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:07.350 00:30:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.350 00:30:54 -- host/discovery.sh@117 -- # sleep 1 00:22:07.350 [2024-07-13 00:30:54.366198] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:07.350 [2024-07-13 00:30:54.423520] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:07.350 [2024-07-13 00:30:54.423552] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:07.350 [2024-07-13 00:30:54.423577] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:08.285 00:30:55 -- host/discovery.sh@118 -- # get_subsystem_names 00:22:08.285 00:30:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:08.285 00:30:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:08.285 00:30:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.285 00:30:55 -- common/autotest_common.sh@10 -- # set +x 00:22:08.285 00:30:55 -- host/discovery.sh@59 -- # sort 00:22:08.285 00:30:55 -- host/discovery.sh@59 -- # xargs 00:22:08.285 00:30:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.285 00:30:55 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.285 00:30:55 -- host/discovery.sh@119 -- # get_bdev_list 00:22:08.285 00:30:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.285 00:30:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:08.285 00:30:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.285 00:30:55 -- common/autotest_common.sh@10 -- # set +x 00:22:08.285 00:30:55 -- host/discovery.sh@55 -- # sort 00:22:08.286 00:30:55 -- host/discovery.sh@55 -- # xargs 00:22:08.286 00:30:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.286 00:30:55 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:08.286 00:30:55 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:22:08.286 00:30:55 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:08.286 00:30:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.286 00:30:55 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:08.286 00:30:55 -- common/autotest_common.sh@10 -- # set +x 00:22:08.286 00:30:55 -- host/discovery.sh@63 
-- # sort -n 00:22:08.286 00:30:55 -- host/discovery.sh@63 -- # xargs 00:22:08.286 00:30:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.286 00:30:55 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:08.286 00:30:55 -- host/discovery.sh@121 -- # get_notification_count 00:22:08.286 00:30:55 -- host/discovery.sh@74 -- # jq '. | length' 00:22:08.286 00:30:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:08.286 00:30:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.286 00:30:55 -- common/autotest_common.sh@10 -- # set +x 00:22:08.286 00:30:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.286 00:30:55 -- host/discovery.sh@74 -- # notification_count=0 00:22:08.286 00:30:55 -- host/discovery.sh@75 -- # notify_id=2 00:22:08.543 00:30:55 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:22:08.543 00:30:55 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:08.543 00:30:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.543 00:30:55 -- common/autotest_common.sh@10 -- # set +x 00:22:08.543 [2024-07-13 00:30:55.520709] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:08.543 [2024-07-13 00:30:55.520757] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:08.543 00:30:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.543 00:30:55 -- host/discovery.sh@127 -- # sleep 1 00:22:08.543 [2024-07-13 00:30:55.529523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.543 [2024-07-13 00:30:55.529578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.543 [2024-07-13 00:30:55.529593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.543 [2024-07-13 00:30:55.529603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.543 [2024-07-13 00:30:55.529613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.543 [2024-07-13 00:30:55.529640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.543 [2024-07-13 00:30:55.529664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.543 [2024-07-13 00:30:55.529675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.543 [2024-07-13 00:30:55.529686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e150 is same with the state(5) to be set 00:22:08.543 [2024-07-13 00:30:55.539461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191e150 (9): Bad file descriptor 00:22:08.543 [2024-07-13 00:30:55.549482] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:08.543 [2024-07-13 00:30:55.549651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:08.543 [2024-07-13 00:30:55.549711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.543 [2024-07-13 00:30:55.549731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191e150 with addr=10.0.0.2, port=4420 00:22:08.543 [2024-07-13 00:30:55.549743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e150 is same with the state(5) to be set 00:22:08.543 [2024-07-13 00:30:55.549762] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191e150 (9): Bad file descriptor 00:22:08.543 [2024-07-13 00:30:55.549792] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:08.543 [2024-07-13 00:30:55.549804] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:08.543 [2024-07-13 00:30:55.549816] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:08.543 [2024-07-13 00:30:55.549833] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:08.543 [2024-07-13 00:30:55.559562] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:08.543 [2024-07-13 00:30:55.559676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.543 [2024-07-13 00:30:55.559727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.544 [2024-07-13 00:30:55.559746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191e150 with addr=10.0.0.2, port=4420 00:22:08.544 [2024-07-13 00:30:55.559773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e150 is same with the state(5) to be set 00:22:08.544 [2024-07-13 00:30:55.559825] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191e150 (9): Bad file descriptor 00:22:08.544 [2024-07-13 00:30:55.559854] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:08.544 [2024-07-13 00:30:55.559867] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:08.544 [2024-07-13 00:30:55.559878] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:08.544 [2024-07-13 00:30:55.559895] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:08.544 [2024-07-13 00:30:55.569645] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:08.544 [2024-07-13 00:30:55.569756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.544 [2024-07-13 00:30:55.569807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.544 [2024-07-13 00:30:55.569826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191e150 with addr=10.0.0.2, port=4420 00:22:08.544 [2024-07-13 00:30:55.569843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e150 is same with the state(5) to be set 00:22:08.544 [2024-07-13 00:30:55.569861] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191e150 (9): Bad file descriptor 00:22:08.544 [2024-07-13 00:30:55.569890] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:08.544 [2024-07-13 00:30:55.569902] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:08.544 [2024-07-13 00:30:55.569912] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:08.544 [2024-07-13 00:30:55.569927] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:08.544 [2024-07-13 00:30:55.579719] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:08.544 [2024-07-13 00:30:55.579821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.544 [2024-07-13 00:30:55.579871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.544 [2024-07-13 00:30:55.579889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191e150 with addr=10.0.0.2, port=4420 00:22:08.544 [2024-07-13 00:30:55.579900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e150 is same with the state(5) to be set 00:22:08.544 [2024-07-13 00:30:55.579949] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191e150 (9): Bad file descriptor 00:22:08.544 [2024-07-13 00:30:55.579995] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:08.544 [2024-07-13 00:30:55.580007] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:08.544 [2024-07-13 00:30:55.580017] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:08.544 [2024-07-13 00:30:55.580033] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:08.544 [2024-07-13 00:30:55.589788] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:08.544 [2024-07-13 00:30:55.589890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.544 [2024-07-13 00:30:55.589941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.544 [2024-07-13 00:30:55.589975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191e150 with addr=10.0.0.2, port=4420 00:22:08.544 [2024-07-13 00:30:55.589986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e150 is same with the state(5) to be set 00:22:08.544 [2024-07-13 00:30:55.590003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191e150 (9): Bad file descriptor 00:22:08.544 [2024-07-13 00:30:55.590031] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:08.544 [2024-07-13 00:30:55.590043] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:08.544 [2024-07-13 00:30:55.590052] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:08.544 [2024-07-13 00:30:55.590067] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:08.544 [2024-07-13 00:30:55.599859] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:08.544 [2024-07-13 00:30:55.599962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.544 [2024-07-13 00:30:55.600012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.544 [2024-07-13 00:30:55.600030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191e150 with addr=10.0.0.2, port=4420 00:22:08.544 [2024-07-13 00:30:55.600041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e150 is same with the state(5) to be set 00:22:08.544 [2024-07-13 00:30:55.600074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191e150 (9): Bad file descriptor 00:22:08.544 [2024-07-13 00:30:55.600136] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:08.544 [2024-07-13 00:30:55.600149] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:08.544 [2024-07-13 00:30:55.600159] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:08.544 [2024-07-13 00:30:55.600175] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
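The retry storm above is the expected negative path: the 4420 listener has been pulled while the 4421 listener stays up (the discovery poller just below reports 4420 "not found" and 4421 "found again"), so each reconnect to 10.0.0.2:4420 is refused and a fresh reset/connect attempt is logged roughly every 10 ms until the test moves on to verify the 4421 path. A minimal sketch for decoding the errno values that recur throughout this log, assuming only that python3 is on PATH (not part of the test itself):

python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'   # ECONNREFUSED - Connection refused
python3 -c 'import errno, os; print(errno.errorcode[110], "-", os.strerror(110))'   # ETIMEDOUT   - Connection timed out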
00:22:08.544 [2024-07-13 00:30:55.607114] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:08.544 [2024-07-13 00:30:55.607169] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:09.479 00:30:56 -- host/discovery.sh@128 -- # get_subsystem_names 00:22:09.479 00:30:56 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:09.479 00:30:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.479 00:30:56 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:09.479 00:30:56 -- common/autotest_common.sh@10 -- # set +x 00:22:09.479 00:30:56 -- host/discovery.sh@59 -- # sort 00:22:09.479 00:30:56 -- host/discovery.sh@59 -- # xargs 00:22:09.479 00:30:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.479 00:30:56 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.479 00:30:56 -- host/discovery.sh@129 -- # get_bdev_list 00:22:09.479 00:30:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.479 00:30:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.479 00:30:56 -- common/autotest_common.sh@10 -- # set +x 00:22:09.479 00:30:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:09.479 00:30:56 -- host/discovery.sh@55 -- # xargs 00:22:09.479 00:30:56 -- host/discovery.sh@55 -- # sort 00:22:09.479 00:30:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.479 00:30:56 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:09.479 00:30:56 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:22:09.479 00:30:56 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:09.479 00:30:56 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:09.479 00:30:56 -- host/discovery.sh@63 -- # sort -n 00:22:09.479 00:30:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.479 00:30:56 -- host/discovery.sh@63 -- # xargs 00:22:09.479 00:30:56 -- common/autotest_common.sh@10 -- # set +x 00:22:09.479 00:30:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.479 00:30:56 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:22:09.479 00:30:56 -- host/discovery.sh@131 -- # get_notification_count 00:22:09.479 00:30:56 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:09.480 00:30:56 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:09.480 00:30:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.480 00:30:56 -- common/autotest_common.sh@10 -- # set +x 00:22:09.739 00:30:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.739 00:30:56 -- host/discovery.sh@74 -- # notification_count=0 00:22:09.739 00:30:56 -- host/discovery.sh@75 -- # notify_id=2 00:22:09.739 00:30:56 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:22:09.739 00:30:56 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:09.739 00:30:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.739 00:30:56 -- common/autotest_common.sh@10 -- # set +x 00:22:09.739 00:30:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.739 00:30:56 -- host/discovery.sh@135 -- # sleep 1 00:22:10.674 00:30:57 -- host/discovery.sh@136 -- # get_subsystem_names 00:22:10.674 00:30:57 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:10.674 00:30:57 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:10.674 00:30:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.674 00:30:57 -- common/autotest_common.sh@10 -- # set +x 00:22:10.674 00:30:57 -- host/discovery.sh@59 -- # sort 00:22:10.674 00:30:57 -- host/discovery.sh@59 -- # xargs 00:22:10.674 00:30:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:10.674 00:30:57 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:22:10.674 00:30:57 -- host/discovery.sh@137 -- # get_bdev_list 00:22:10.674 00:30:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.674 00:30:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.674 00:30:57 -- common/autotest_common.sh@10 -- # set +x 00:22:10.674 00:30:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:10.674 00:30:57 -- host/discovery.sh@55 -- # sort 00:22:10.674 00:30:57 -- host/discovery.sh@55 -- # xargs 00:22:10.674 00:30:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:10.674 00:30:57 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:22:10.674 00:30:57 -- host/discovery.sh@138 -- # get_notification_count 00:22:10.933 00:30:57 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:10.933 00:30:57 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:10.933 00:30:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.933 00:30:57 -- common/autotest_common.sh@10 -- # set +x 00:22:10.933 00:30:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:10.933 00:30:57 -- host/discovery.sh@74 -- # notification_count=2 00:22:10.933 00:30:57 -- host/discovery.sh@75 -- # notify_id=4 00:22:10.933 00:30:57 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:22:10.933 00:30:57 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:10.933 00:30:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.933 00:30:57 -- common/autotest_common.sh@10 -- # set +x 00:22:11.868 [2024-07-13 00:30:58.970396] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:11.868 [2024-07-13 00:30:58.970447] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:11.868 [2024-07-13 00:30:58.970484] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:11.868 [2024-07-13 00:30:59.056540] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:12.127 [2024-07-13 00:30:59.115979] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:12.127 [2024-07-13 00:30:59.116061] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:12.127 00:30:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.127 00:30:59 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:12.127 00:30:59 -- common/autotest_common.sh@640 -- # local es=0 00:22:12.127 00:30:59 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:12.127 00:30:59 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:12.127 00:30:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:12.127 00:30:59 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:12.127 00:30:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:12.127 00:30:59 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:12.127 00:30:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.127 00:30:59 -- common/autotest_common.sh@10 -- # set +x 00:22:12.127 request: 00:22:12.127 { 00:22:12.127 2024/07/13 00:30:59 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:12.127 "method": "bdev_nvme_start_discovery", 00:22:12.127 "params": { 00:22:12.127 "name": "nvme", 00:22:12.127 "trtype": "tcp", 00:22:12.127 "traddr": "10.0.0.2", 00:22:12.127 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:12.127 "adrfam": "ipv4", 00:22:12.127 "trsvcid": "8009", 00:22:12.127 "wait_for_attach": true 00:22:12.127 } 
00:22:12.127 } 00:22:12.127 Got JSON-RPC error response 00:22:12.127 GoRPCClient: error on JSON-RPC call 00:22:12.127 00:30:59 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:12.127 00:30:59 -- common/autotest_common.sh@643 -- # es=1 00:22:12.127 00:30:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:12.127 00:30:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:12.127 00:30:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:12.127 00:30:59 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:22:12.127 00:30:59 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:12.127 00:30:59 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:12.127 00:30:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.127 00:30:59 -- common/autotest_common.sh@10 -- # set +x 00:22:12.127 00:30:59 -- host/discovery.sh@67 -- # sort 00:22:12.127 00:30:59 -- host/discovery.sh@67 -- # xargs 00:22:12.127 00:30:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.127 00:30:59 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:22:12.127 00:30:59 -- host/discovery.sh@147 -- # get_bdev_list 00:22:12.127 00:30:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.127 00:30:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.127 00:30:59 -- common/autotest_common.sh@10 -- # set +x 00:22:12.127 00:30:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:12.127 00:30:59 -- host/discovery.sh@55 -- # sort 00:22:12.127 00:30:59 -- host/discovery.sh@55 -- # xargs 00:22:12.127 00:30:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.127 00:30:59 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:12.127 00:30:59 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:12.127 00:30:59 -- common/autotest_common.sh@640 -- # local es=0 00:22:12.127 00:30:59 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:12.127 00:30:59 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:12.127 00:30:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:12.127 00:30:59 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:12.127 00:30:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:12.127 00:30:59 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:12.127 00:30:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.127 00:30:59 -- common/autotest_common.sh@10 -- # set +x 00:22:12.127 2024/07/13 00:30:59 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:12.127 request: 00:22:12.127 { 00:22:12.127 "method": "bdev_nvme_start_discovery", 00:22:12.127 "params": { 00:22:12.127 "name": "nvme_second", 00:22:12.127 "trtype": "tcp", 00:22:12.127 "traddr": "10.0.0.2", 00:22:12.128 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:12.128 "adrfam": "ipv4", 00:22:12.128 
"trsvcid": "8009", 00:22:12.128 "wait_for_attach": true 00:22:12.128 } 00:22:12.128 } 00:22:12.128 Got JSON-RPC error response 00:22:12.128 GoRPCClient: error on JSON-RPC call 00:22:12.128 00:30:59 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:12.128 00:30:59 -- common/autotest_common.sh@643 -- # es=1 00:22:12.128 00:30:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:12.128 00:30:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:12.128 00:30:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:12.128 00:30:59 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:22:12.128 00:30:59 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:12.128 00:30:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.128 00:30:59 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:12.128 00:30:59 -- common/autotest_common.sh@10 -- # set +x 00:22:12.128 00:30:59 -- host/discovery.sh@67 -- # sort 00:22:12.128 00:30:59 -- host/discovery.sh@67 -- # xargs 00:22:12.128 00:30:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.128 00:30:59 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:22:12.128 00:30:59 -- host/discovery.sh@153 -- # get_bdev_list 00:22:12.128 00:30:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.128 00:30:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.128 00:30:59 -- common/autotest_common.sh@10 -- # set +x 00:22:12.128 00:30:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:12.128 00:30:59 -- host/discovery.sh@55 -- # xargs 00:22:12.128 00:30:59 -- host/discovery.sh@55 -- # sort 00:22:12.387 00:30:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.387 00:30:59 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:12.387 00:30:59 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:12.387 00:30:59 -- common/autotest_common.sh@640 -- # local es=0 00:22:12.387 00:30:59 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:12.387 00:30:59 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:12.387 00:30:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:12.387 00:30:59 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:12.387 00:30:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:12.387 00:30:59 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:12.387 00:30:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.387 00:30:59 -- common/autotest_common.sh@10 -- # set +x 00:22:13.325 [2024-07-13 00:31:00.401926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.325 [2024-07-13 00:31:00.402056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.325 [2024-07-13 00:31:00.402078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c4000 with addr=10.0.0.2, port=8010 00:22:13.325 [2024-07-13 00:31:00.402102] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:13.325 [2024-07-13 00:31:00.402115] 
nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:13.325 [2024-07-13 00:31:00.402125] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:14.261 [2024-07-13 00:31:01.401918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.261 [2024-07-13 00:31:01.402031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.261 [2024-07-13 00:31:01.402052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c4000 with addr=10.0.0.2, port=8010 00:22:14.261 [2024-07-13 00:31:01.402075] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:14.261 [2024-07-13 00:31:01.402085] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:14.261 [2024-07-13 00:31:01.402095] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:15.199 [2024-07-13 00:31:02.401744] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:15.199 2024/07/13 00:31:02 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:15.199 request: 00:22:15.199 { 00:22:15.199 "method": "bdev_nvme_start_discovery", 00:22:15.199 "params": { 00:22:15.199 "name": "nvme_second", 00:22:15.199 "trtype": "tcp", 00:22:15.199 "traddr": "10.0.0.2", 00:22:15.199 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:15.199 "adrfam": "ipv4", 00:22:15.199 "trsvcid": "8010", 00:22:15.199 "attach_timeout_ms": 3000 00:22:15.199 } 00:22:15.199 } 00:22:15.199 Got JSON-RPC error response 00:22:15.199 GoRPCClient: error on JSON-RPC call 00:22:15.199 00:31:02 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:15.199 00:31:02 -- common/autotest_common.sh@643 -- # es=1 00:22:15.199 00:31:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:15.199 00:31:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:15.199 00:31:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:15.199 00:31:02 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:22:15.199 00:31:02 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:15.199 00:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:15.199 00:31:02 -- common/autotest_common.sh@10 -- # set +x 00:22:15.199 00:31:02 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:15.199 00:31:02 -- host/discovery.sh@67 -- # sort 00:22:15.199 00:31:02 -- host/discovery.sh@67 -- # xargs 00:22:15.199 00:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:15.458 00:31:02 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:22:15.458 00:31:02 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:22:15.458 00:31:02 -- host/discovery.sh@162 -- # kill 95745 00:22:15.458 00:31:02 -- host/discovery.sh@163 -- # nvmftestfini 00:22:15.458 00:31:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:15.458 00:31:02 -- nvmf/common.sh@116 -- # sync 00:22:15.458 00:31:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:15.458 00:31:02 -- nvmf/common.sh@119 -- # set +e 00:22:15.458 00:31:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:15.458 00:31:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 
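Both negative-path checks above behave as intended: re-issuing bdev_nvme_start_discovery under a name that is already running returns JSON-RPC Code=-17 (File exists), and pointing nvme_second at the unused port 8010 with a 3000 ms attach timeout keeps getting connect() refused (errno 111) until the window expires and the RPC fails with Code=-110 (Connection timed out). A hedged sketch of the timeout case issued through SPDK's rpc.py directly, with the socket path, address, and flags taken from the trace (the test itself goes through the rpc_cmd wrapper; the relative script path is an assumption about the working directory):

# expected to fail with Code=-110 (Connection timed out) once attach_timeout_ms (3000 ms) expires
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
    -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000

The "modprobe -v -r nvme-tcp" trace immediately above is answered by the rmmod lines that follow: -v makes modprobe print each rmmod it issues while unloading nvme_tcp together with the now-unused nvme_fabrics and nvme_keyring modules.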
00:22:15.458 rmmod nvme_tcp 00:22:15.458 rmmod nvme_fabrics 00:22:15.458 rmmod nvme_keyring 00:22:15.458 00:31:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:15.458 00:31:02 -- nvmf/common.sh@123 -- # set -e 00:22:15.458 00:31:02 -- nvmf/common.sh@124 -- # return 0 00:22:15.458 00:31:02 -- nvmf/common.sh@477 -- # '[' -n 95690 ']' 00:22:15.458 00:31:02 -- nvmf/common.sh@478 -- # killprocess 95690 00:22:15.458 00:31:02 -- common/autotest_common.sh@926 -- # '[' -z 95690 ']' 00:22:15.458 00:31:02 -- common/autotest_common.sh@930 -- # kill -0 95690 00:22:15.458 00:31:02 -- common/autotest_common.sh@931 -- # uname 00:22:15.458 00:31:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:15.458 00:31:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95690 00:22:15.458 00:31:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:15.458 killing process with pid 95690 00:22:15.458 00:31:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:15.458 00:31:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95690' 00:22:15.458 00:31:02 -- common/autotest_common.sh@945 -- # kill 95690 00:22:15.458 00:31:02 -- common/autotest_common.sh@950 -- # wait 95690 00:22:15.718 00:31:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:15.718 00:31:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:15.718 00:31:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:15.718 00:31:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:15.718 00:31:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:15.718 00:31:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.718 00:31:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.718 00:31:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.718 00:31:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:15.718 00:22:15.718 real 0m14.275s 00:22:15.718 user 0m27.845s 00:22:15.718 sys 0m1.819s 00:22:15.718 00:31:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:15.718 00:31:02 -- common/autotest_common.sh@10 -- # set +x 00:22:15.718 ************************************ 00:22:15.718 END TEST nvmf_discovery 00:22:15.718 ************************************ 00:22:15.977 00:31:02 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:15.977 00:31:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:15.977 00:31:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:15.977 00:31:02 -- common/autotest_common.sh@10 -- # set +x 00:22:15.977 ************************************ 00:22:15.977 START TEST nvmf_discovery_remove_ifc 00:22:15.977 ************************************ 00:22:15.977 00:31:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:15.977 * Looking for test storage... 
00:22:15.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:15.977 00:31:03 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:15.977 00:31:03 -- nvmf/common.sh@7 -- # uname -s 00:22:15.977 00:31:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.977 00:31:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.977 00:31:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.977 00:31:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.977 00:31:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.977 00:31:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.977 00:31:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.977 00:31:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.977 00:31:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.977 00:31:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.977 00:31:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:22:15.977 00:31:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:22:15.977 00:31:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.977 00:31:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.977 00:31:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:15.977 00:31:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:15.977 00:31:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.977 00:31:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.977 00:31:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.977 00:31:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.977 00:31:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.977 00:31:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.977 00:31:03 -- 
paths/export.sh@5 -- # export PATH 00:22:15.977 00:31:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.977 00:31:03 -- nvmf/common.sh@46 -- # : 0 00:22:15.977 00:31:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:15.977 00:31:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:15.977 00:31:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:15.977 00:31:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.977 00:31:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.977 00:31:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:15.977 00:31:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:15.977 00:31:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:15.977 00:31:03 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:15.977 00:31:03 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:15.977 00:31:03 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:15.977 00:31:03 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:15.977 00:31:03 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:15.977 00:31:03 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:15.977 00:31:03 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:15.977 00:31:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:15.977 00:31:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.977 00:31:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:15.977 00:31:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:15.977 00:31:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:15.977 00:31:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.977 00:31:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.977 00:31:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.977 00:31:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:15.977 00:31:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:15.977 00:31:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:15.977 00:31:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:15.977 00:31:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:15.977 00:31:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:15.977 00:31:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.977 00:31:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.977 00:31:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:15.977 00:31:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:15.977 00:31:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:15.977 00:31:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:15.977 00:31:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:15.977 00:31:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
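The variables being set here describe the virtual topology that nvmf_veth_init builds just below: an initiator-side veth (nvmf_init_if, 10.0.0.1/24) on the host, target-side veths (nvmf_tgt_if 10.0.0.2/24, nvmf_tgt_if2 10.0.0.3/24) moved into the nvmf_tgt_ns_spdk namespace, and the nvmf_br bridge tying the peer ends together. A condensed sketch abridged from the ip commands traced below (run as root; the second target interface and the iptables ACCEPT rules are omitted here for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br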
00:22:15.977 00:31:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:15.977 00:31:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:15.977 00:31:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:15.977 00:31:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:15.977 00:31:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:15.977 00:31:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:15.977 Cannot find device "nvmf_tgt_br" 00:22:15.977 00:31:03 -- nvmf/common.sh@154 -- # true 00:22:15.977 00:31:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:15.977 Cannot find device "nvmf_tgt_br2" 00:22:15.977 00:31:03 -- nvmf/common.sh@155 -- # true 00:22:15.977 00:31:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:15.977 00:31:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:15.977 Cannot find device "nvmf_tgt_br" 00:22:15.977 00:31:03 -- nvmf/common.sh@157 -- # true 00:22:15.977 00:31:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:15.977 Cannot find device "nvmf_tgt_br2" 00:22:15.977 00:31:03 -- nvmf/common.sh@158 -- # true 00:22:15.977 00:31:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:15.977 00:31:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:16.235 00:31:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:16.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:16.235 00:31:03 -- nvmf/common.sh@161 -- # true 00:22:16.235 00:31:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:16.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:16.235 00:31:03 -- nvmf/common.sh@162 -- # true 00:22:16.235 00:31:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:16.235 00:31:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:16.235 00:31:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:16.235 00:31:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:16.235 00:31:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:16.235 00:31:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:16.235 00:31:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:16.235 00:31:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:16.235 00:31:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:16.235 00:31:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:16.235 00:31:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:16.235 00:31:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:16.235 00:31:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:16.235 00:31:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:16.235 00:31:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:16.235 00:31:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:16.235 00:31:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:16.235 00:31:03 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:22:16.235 00:31:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:16.235 00:31:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:16.235 00:31:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:16.235 00:31:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:16.235 00:31:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:16.235 00:31:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:16.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:22:16.235 00:22:16.235 --- 10.0.0.2 ping statistics --- 00:22:16.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.235 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:16.235 00:31:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:16.235 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:16.235 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:22:16.235 00:22:16.235 --- 10.0.0.3 ping statistics --- 00:22:16.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.235 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:22:16.235 00:31:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:16.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:16.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:22:16.235 00:22:16.235 --- 10.0.0.1 ping statistics --- 00:22:16.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.235 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:22:16.235 00:31:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.235 00:31:03 -- nvmf/common.sh@421 -- # return 0 00:22:16.235 00:31:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:16.235 00:31:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.235 00:31:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:16.235 00:31:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:16.235 00:31:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.235 00:31:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:16.235 00:31:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:16.235 00:31:03 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:16.235 00:31:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:16.235 00:31:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:16.235 00:31:03 -- common/autotest_common.sh@10 -- # set +x 00:22:16.235 00:31:03 -- nvmf/common.sh@469 -- # nvmfpid=96248 00:22:16.235 00:31:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:16.235 00:31:03 -- nvmf/common.sh@470 -- # waitforlisten 96248 00:22:16.235 00:31:03 -- common/autotest_common.sh@819 -- # '[' -z 96248 ']' 00:22:16.235 00:31:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.235 00:31:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:16.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.235 00:31:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
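With connectivity verified by the three single-packet pings above (initiator to both namespaced target addresses, and back from the namespace to 10.0.0.1), nvmfappstart launches the target inside the namespace. The flags visible in the trace: -m 0x2 pins the target's reactor to core 1 (hence the "Reactor started on core 1" message a little further down), -i 0 selects shared-memory id 0, and -e 0xFFFF enables every tracepoint group ("Tracepoint Group Mask 0xFFFF specified"). Restated as a stand-alone sketch, with the binary path shortened relative to the repo root (an assumption, not the literal path used):

# target side: SPDK nvmf_tgt running inside the target network namespace
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
NVMF_PID=$!   # illustrative name; the trace records this as nvmfpid=96248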
00:22:16.235 00:31:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:16.235 00:31:03 -- common/autotest_common.sh@10 -- # set +x 00:22:16.493 [2024-07-13 00:31:03.499752] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:16.493 [2024-07-13 00:31:03.499858] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.493 [2024-07-13 00:31:03.637592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.751 [2024-07-13 00:31:03.735339] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:16.751 [2024-07-13 00:31:03.735502] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.751 [2024-07-13 00:31:03.735516] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.751 [2024-07-13 00:31:03.735525] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:16.751 [2024-07-13 00:31:03.735561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.325 00:31:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:17.325 00:31:04 -- common/autotest_common.sh@852 -- # return 0 00:22:17.325 00:31:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:17.325 00:31:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:17.325 00:31:04 -- common/autotest_common.sh@10 -- # set +x 00:22:17.325 00:31:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.325 00:31:04 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:17.325 00:31:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.325 00:31:04 -- common/autotest_common.sh@10 -- # set +x 00:22:17.325 [2024-07-13 00:31:04.554948] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.583 [2024-07-13 00:31:04.563103] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:17.583 null0 00:22:17.583 [2024-07-13 00:31:04.594987] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.583 00:31:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.583 00:31:04 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96304 00:22:17.583 00:31:04 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:17.583 00:31:04 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96304 /tmp/host.sock 00:22:17.583 00:31:04 -- common/autotest_common.sh@819 -- # '[' -z 96304 ']' 00:22:17.583 00:31:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:22:17.583 00:31:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:17.583 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:17.583 00:31:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:17.583 00:31:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:17.583 00:31:04 -- common/autotest_common.sh@10 -- # set +x 00:22:17.583 [2024-07-13 00:31:04.672775] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
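Two SPDK processes are now in play, which is why every rpc_cmd in this test names a socket explicitly: the target (pid 96248) answers on the default /var/tmp/spdk.sock it was waited for above, while the host-side app (pid 96304) gets its own RPC socket, bdev_nvme debug logging, and --wait-for-rpc so the framework only initializes once bdev_nvme_set_options has been applied. The host launch restated as a sketch (binary path shortened as an assumption; the "Starting SPDK ... initialization..." record just above and the EAL-parameters record that follows are this process's startup banner):

# host/initiator side: a second SPDK app outside the namespace, core 0 only,
# reachable at /tmp/host.sock; -L bdev_nvme produces the *DEBUG* lines seen later
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
HOST_PID=$!   # illustrative name; the trace records this as hostpid=96304
# subsequent RPCs pick the process via the socket, e.g.:
#   rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1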
00:22:17.583 [2024-07-13 00:31:04.672906] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96304 ] 00:22:17.842 [2024-07-13 00:31:04.814182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.842 [2024-07-13 00:31:04.913551] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:17.842 [2024-07-13 00:31:04.913782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.779 00:31:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:18.779 00:31:05 -- common/autotest_common.sh@852 -- # return 0 00:22:18.779 00:31:05 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:18.779 00:31:05 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:18.779 00:31:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:18.779 00:31:05 -- common/autotest_common.sh@10 -- # set +x 00:22:18.779 00:31:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:18.779 00:31:05 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:18.779 00:31:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:18.779 00:31:05 -- common/autotest_common.sh@10 -- # set +x 00:22:18.779 00:31:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:18.779 00:31:05 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:18.779 00:31:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:18.779 00:31:05 -- common/autotest_common.sh@10 -- # set +x 00:22:19.714 [2024-07-13 00:31:06.768139] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:19.714 [2024-07-13 00:31:06.768206] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:19.714 [2024-07-13 00:31:06.768225] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:19.714 [2024-07-13 00:31:06.854296] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:19.714 [2024-07-13 00:31:06.910250] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:19.714 [2024-07-13 00:31:06.910341] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:19.714 [2024-07-13 00:31:06.910370] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:19.714 [2024-07-13 00:31:06.910386] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:19.714 [2024-07-13 00:31:06.910412] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:19.714 00:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.714 00:31:06 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:19.714 00:31:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:19.714 [2024-07-13 
00:31:06.916481] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x7c9530 was disconnected and freed. delete nvme_qpair. 00:22:19.714 00:31:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.714 00:31:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:19.714 00:31:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:19.714 00:31:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:19.714 00:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.714 00:31:06 -- common/autotest_common.sh@10 -- # set +x 00:22:19.714 00:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.973 00:31:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:19.973 00:31:06 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:19.973 00:31:06 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:19.973 00:31:06 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:19.973 00:31:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:19.973 00:31:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:19.973 00:31:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.973 00:31:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:19.973 00:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.973 00:31:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:19.973 00:31:06 -- common/autotest_common.sh@10 -- # set +x 00:22:19.973 00:31:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.973 00:31:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:19.973 00:31:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:20.910 00:31:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:20.910 00:31:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.910 00:31:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:20.910 00:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.910 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:22:20.910 00:31:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:20.910 00:31:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:20.911 00:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.911 00:31:08 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:20.911 00:31:08 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:22.284 00:31:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:22.284 00:31:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.284 00:31:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:22.284 00:31:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:22.284 00:31:09 -- common/autotest_common.sh@10 -- # set +x 00:22:22.284 00:31:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:22.284 00:31:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:22.284 00:31:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:22.284 00:31:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:22.284 00:31:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:23.217 00:31:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:23.217 00:31:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:22:23.217 00:31:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:23.217 00:31:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.217 00:31:10 -- common/autotest_common.sh@10 -- # set +x 00:22:23.217 00:31:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:23.217 00:31:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:23.217 00:31:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.217 00:31:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:23.217 00:31:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:24.150 00:31:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:24.150 00:31:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.150 00:31:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:24.150 00:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.150 00:31:11 -- common/autotest_common.sh@10 -- # set +x 00:22:24.150 00:31:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:24.150 00:31:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:24.150 00:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.150 00:31:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:24.150 00:31:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:25.083 00:31:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:25.083 00:31:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.083 00:31:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.083 00:31:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:25.083 00:31:12 -- common/autotest_common.sh@10 -- # set +x 00:22:25.083 00:31:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:25.083 00:31:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:25.341 00:31:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:25.341 [2024-07-13 00:31:12.338348] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:25.341 [2024-07-13 00:31:12.338443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.341 [2024-07-13 00:31:12.338460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.341 [2024-07-13 00:31:12.338480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.341 [2024-07-13 00:31:12.338489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.341 [2024-07-13 00:31:12.338498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.341 [2024-07-13 00:31:12.338507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.341 [2024-07-13 00:31:12.338517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.341 [2024-07-13 00:31:12.338525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.341 [2024-07-13 
00:31:12.338534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.341 [2024-07-13 00:31:12.338543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.341 [2024-07-13 00:31:12.338552] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78fc50 is same with the state(5) to be set 00:22:25.341 [2024-07-13 00:31:12.348343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x78fc50 (9): Bad file descriptor 00:22:25.341 00:31:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:25.341 00:31:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:25.341 [2024-07-13 00:31:12.358363] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.273 00:31:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:26.273 00:31:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.273 00:31:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:26.273 00:31:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:26.273 00:31:13 -- common/autotest_common.sh@10 -- # set +x 00:22:26.274 00:31:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:26.274 00:31:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:26.274 [2024-07-13 00:31:13.413743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:27.648 [2024-07-13 00:31:14.438732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:27.648 [2024-07-13 00:31:14.438866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78fc50 with addr=10.0.0.2, port=4420 00:22:27.648 [2024-07-13 00:31:14.438912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78fc50 is same with the state(5) to be set 00:22:27.648 [2024-07-13 00:31:14.438974] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:27.648 [2024-07-13 00:31:14.438998] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:27.648 [2024-07-13 00:31:14.439017] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:27.648 [2024-07-13 00:31:14.439038] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:27.648 [2024-07-13 00:31:14.439947] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x78fc50 (9): Bad file descriptor 00:22:27.648 [2024-07-13 00:31:14.440043] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
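This failure cascade is the point of the test: once nvme0n1 was confirmed, the target address was pulled out from under the live connection (the "ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if" and "ip link set nvmf_tgt_if down" steps traced earlier), so the host's receive path times out (errno 110), the outstanding ASYNC EVENT REQUEST and KEEP ALIVE commands complete as ABORTED - SQ DELETION, and reconnect attempts keep failing until the 2-second --ctrlr-loss-timeout-sec configured at discovery start expires and the bdev is deleted. The sleep-1 loop above simply polls bdev_get_bdevs once per second until the list is empty; a minimal sketch of the same wait by hand, assuming rpc.py and jq on PATH (socket path from the trace):

# poll the host app until no bdevs remain after the interface is taken down
while [ -n "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name')" ]; do sleep 1; done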
00:22:27.648 [2024-07-13 00:31:14.440105] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:27.648 [2024-07-13 00:31:14.440173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.648 [2024-07-13 00:31:14.440203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.648 [2024-07-13 00:31:14.440230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.648 [2024-07-13 00:31:14.440262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.648 [2024-07-13 00:31:14.440297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.648 [2024-07-13 00:31:14.440317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.648 [2024-07-13 00:31:14.440341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.648 [2024-07-13 00:31:14.440361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.648 [2024-07-13 00:31:14.440383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.648 [2024-07-13 00:31:14.440403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.648 [2024-07-13 00:31:14.440423] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
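The record above marks the discovery controller (nqn.2014-08.org.nvmexpress.discovery) as failed on the same dead path; the records that follow show its shutdown attempt failing too (Property Get and CC-register reads over a closed socket), the bdev list going empty, and then the recovery half of the test: the address is restored, the interface comes back up, and the still-running discovery service is expected to re-attach on its own and surface a fresh bdev as nvme1n1, which is exactly what the later records show. The restore step, as traced below:

# put the target address back and bring the interface up again; the discovery
# poller reconnects to 8009, sees the 4420 subsystem again, and creates nvme1n1
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up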
00:22:27.648 [2024-07-13 00:31:14.440455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x790060 (9): Bad file descriptor 00:22:27.648 [2024-07-13 00:31:14.441077] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:27.648 [2024-07-13 00:31:14.441137] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:27.648 00:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:27.648 00:31:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:27.648 00:31:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:28.584 00:31:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:28.584 00:31:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:28.584 00:31:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:28.584 00:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.584 00:31:15 -- common/autotest_common.sh@10 -- # set +x 00:22:28.584 00:31:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:28.584 00:31:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:28.584 00:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.584 00:31:15 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:28.584 00:31:15 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:28.584 00:31:15 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:28.584 00:31:15 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:28.584 00:31:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:28.584 00:31:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:28.584 00:31:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:28.584 00:31:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.584 00:31:15 -- common/autotest_common.sh@10 -- # set +x 00:22:28.584 00:31:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:28.584 00:31:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:28.584 00:31:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.584 00:31:15 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:28.584 00:31:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:29.517 [2024-07-13 00:31:16.450263] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:29.517 [2024-07-13 00:31:16.450305] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:29.517 [2024-07-13 00:31:16.450339] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:29.517 [2024-07-13 00:31:16.536371] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:29.517 [2024-07-13 00:31:16.591390] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:29.517 [2024-07-13 00:31:16.591461] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:29.517 [2024-07-13 00:31:16.591483] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:29.517 [2024-07-13 00:31:16.591497] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:22:29.517 [2024-07-13 00:31:16.591506] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:29.517 [2024-07-13 00:31:16.598589] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x79cb00 was disconnected and freed. delete nvme_qpair. 00:22:29.517 00:31:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:29.517 00:31:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.517 00:31:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:29.517 00:31:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.517 00:31:16 -- common/autotest_common.sh@10 -- # set +x 00:22:29.517 00:31:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:29.517 00:31:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:29.517 00:31:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.517 00:31:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:29.517 00:31:16 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:29.517 00:31:16 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96304 00:22:29.517 00:31:16 -- common/autotest_common.sh@926 -- # '[' -z 96304 ']' 00:22:29.517 00:31:16 -- common/autotest_common.sh@930 -- # kill -0 96304 00:22:29.517 00:31:16 -- common/autotest_common.sh@931 -- # uname 00:22:29.517 00:31:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:29.517 00:31:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96304 00:22:29.517 killing process with pid 96304 00:22:29.517 00:31:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:29.517 00:31:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:29.517 00:31:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96304' 00:22:29.517 00:31:16 -- common/autotest_common.sh@945 -- # kill 96304 00:22:29.517 00:31:16 -- common/autotest_common.sh@950 -- # wait 96304 00:22:29.775 00:31:16 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:29.775 00:31:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:29.775 00:31:16 -- nvmf/common.sh@116 -- # sync 00:22:29.775 00:31:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:29.775 00:31:16 -- nvmf/common.sh@119 -- # set +e 00:22:29.775 00:31:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:29.775 00:31:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:29.775 rmmod nvme_tcp 00:22:29.775 rmmod nvme_fabrics 00:22:29.775 rmmod nvme_keyring 00:22:29.775 00:31:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:29.775 00:31:16 -- nvmf/common.sh@123 -- # set -e 00:22:29.775 00:31:16 -- nvmf/common.sh@124 -- # return 0 00:22:29.775 00:31:16 -- nvmf/common.sh@477 -- # '[' -n 96248 ']' 00:22:29.775 00:31:16 -- nvmf/common.sh@478 -- # killprocess 96248 00:22:29.775 00:31:16 -- common/autotest_common.sh@926 -- # '[' -z 96248 ']' 00:22:29.775 00:31:16 -- common/autotest_common.sh@930 -- # kill -0 96248 00:22:29.775 00:31:16 -- common/autotest_common.sh@931 -- # uname 00:22:29.775 00:31:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:29.775 00:31:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96248 00:22:30.032 killing process with pid 96248 00:22:30.032 00:31:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:30.032 00:31:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
00:22:30.032 00:31:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96248' 00:22:30.032 00:31:17 -- common/autotest_common.sh@945 -- # kill 96248 00:22:30.032 00:31:17 -- common/autotest_common.sh@950 -- # wait 96248 00:22:30.291 00:31:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:30.291 00:31:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:30.291 00:31:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:30.291 00:31:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:30.291 00:31:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:30.291 00:31:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.291 00:31:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.291 00:31:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.291 00:31:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:30.291 00:22:30.291 real 0m14.350s 00:22:30.291 user 0m24.593s 00:22:30.291 sys 0m1.608s 00:22:30.291 00:31:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:30.291 ************************************ 00:22:30.291 END TEST nvmf_discovery_remove_ifc 00:22:30.291 ************************************ 00:22:30.291 00:31:17 -- common/autotest_common.sh@10 -- # set +x 00:22:30.291 00:31:17 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:30.291 00:31:17 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:30.291 00:31:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:30.291 00:31:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:30.291 00:31:17 -- common/autotest_common.sh@10 -- # set +x 00:22:30.291 ************************************ 00:22:30.291 START TEST nvmf_digest 00:22:30.291 ************************************ 00:22:30.291 00:31:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:30.291 * Looking for test storage... 
00:22:30.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:30.291 00:31:17 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:30.291 00:31:17 -- nvmf/common.sh@7 -- # uname -s 00:22:30.291 00:31:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.291 00:31:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.291 00:31:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:30.291 00:31:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.291 00:31:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.291 00:31:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.291 00:31:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.291 00:31:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.291 00:31:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.291 00:31:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:30.291 00:31:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:22:30.291 00:31:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:22:30.291 00:31:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:30.291 00:31:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:30.291 00:31:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:30.291 00:31:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:30.291 00:31:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.291 00:31:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.291 00:31:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.291 00:31:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.291 00:31:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.292 00:31:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.292 00:31:17 -- paths/export.sh@5 
-- # export PATH 00:22:30.292 00:31:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.292 00:31:17 -- nvmf/common.sh@46 -- # : 0 00:22:30.292 00:31:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:30.292 00:31:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:30.292 00:31:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:30.292 00:31:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:30.292 00:31:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.292 00:31:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:30.292 00:31:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:30.292 00:31:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:30.292 00:31:17 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:30.292 00:31:17 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:30.292 00:31:17 -- host/digest.sh@16 -- # runtime=2 00:22:30.292 00:31:17 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:30.292 00:31:17 -- host/digest.sh@132 -- # nvmftestinit 00:22:30.292 00:31:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:30.292 00:31:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.292 00:31:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:30.292 00:31:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:30.292 00:31:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:30.292 00:31:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.292 00:31:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.292 00:31:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.292 00:31:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:30.292 00:31:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:30.292 00:31:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:30.292 00:31:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:30.292 00:31:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:30.292 00:31:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:30.292 00:31:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.292 00:31:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.292 00:31:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:30.292 00:31:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:30.292 00:31:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:30.292 00:31:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:30.292 00:31:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:30.292 00:31:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.292 00:31:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:30.292 00:31:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:30.292 00:31:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:30.292 00:31:17 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:30.292 00:31:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:30.550 00:31:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:30.550 Cannot find device "nvmf_tgt_br" 00:22:30.550 00:31:17 -- nvmf/common.sh@154 -- # true 00:22:30.550 00:31:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:30.550 Cannot find device "nvmf_tgt_br2" 00:22:30.550 00:31:17 -- nvmf/common.sh@155 -- # true 00:22:30.550 00:31:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:30.550 00:31:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:30.550 Cannot find device "nvmf_tgt_br" 00:22:30.550 00:31:17 -- nvmf/common.sh@157 -- # true 00:22:30.550 00:31:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:30.550 Cannot find device "nvmf_tgt_br2" 00:22:30.550 00:31:17 -- nvmf/common.sh@158 -- # true 00:22:30.550 00:31:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:30.550 00:31:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:30.550 00:31:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:30.550 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:30.550 00:31:17 -- nvmf/common.sh@161 -- # true 00:22:30.550 00:31:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:30.550 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:30.550 00:31:17 -- nvmf/common.sh@162 -- # true 00:22:30.550 00:31:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:30.550 00:31:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:30.550 00:31:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:30.550 00:31:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:30.550 00:31:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:30.550 00:31:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:30.550 00:31:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:30.550 00:31:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:30.550 00:31:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:30.550 00:31:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:30.550 00:31:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:30.550 00:31:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:30.550 00:31:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:30.550 00:31:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:30.550 00:31:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:30.550 00:31:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:30.550 00:31:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:30.550 00:31:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:30.550 00:31:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:30.808 00:31:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:30.809 00:31:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:30.809 
00:31:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:30.809 00:31:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:30.809 00:31:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:30.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:22:30.809 00:22:30.809 --- 10.0.0.2 ping statistics --- 00:22:30.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.809 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:22:30.809 00:31:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:30.809 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:30.809 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:22:30.809 00:22:30.809 --- 10.0.0.3 ping statistics --- 00:22:30.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.809 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:22:30.809 00:31:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:30.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:30.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:22:30.809 00:22:30.809 --- 10.0.0.1 ping statistics --- 00:22:30.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.809 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:22:30.809 00:31:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.809 00:31:17 -- nvmf/common.sh@421 -- # return 0 00:22:30.809 00:31:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:30.809 00:31:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.809 00:31:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:30.809 00:31:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:30.809 00:31:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.809 00:31:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:30.809 00:31:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:30.809 00:31:17 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:30.809 00:31:17 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:30.809 00:31:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:30.809 00:31:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:30.809 00:31:17 -- common/autotest_common.sh@10 -- # set +x 00:22:30.809 ************************************ 00:22:30.809 START TEST nvmf_digest_clean 00:22:30.809 ************************************ 00:22:30.809 00:31:17 -- common/autotest_common.sh@1104 -- # run_digest 00:22:30.809 00:31:17 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:30.809 00:31:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:30.809 00:31:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:30.809 00:31:17 -- common/autotest_common.sh@10 -- # set +x 00:22:30.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:30.809 00:31:17 -- nvmf/common.sh@469 -- # nvmfpid=96712 00:22:30.809 00:31:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:30.809 00:31:17 -- nvmf/common.sh@470 -- # waitforlisten 96712 00:22:30.809 00:31:17 -- common/autotest_common.sh@819 -- # '[' -z 96712 ']' 00:22:30.809 00:31:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.809 00:31:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:30.809 00:31:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.809 00:31:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:30.809 00:31:17 -- common/autotest_common.sh@10 -- # set +x 00:22:30.809 [2024-07-13 00:31:17.941639] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:30.809 [2024-07-13 00:31:17.941986] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.067 [2024-07-13 00:31:18.085527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.067 [2024-07-13 00:31:18.172534] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:31.067 [2024-07-13 00:31:18.172731] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.067 [2024-07-13 00:31:18.172746] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.067 [2024-07-13 00:31:18.172755] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:31.067 [2024-07-13 00:31:18.172789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.004 00:31:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:32.004 00:31:18 -- common/autotest_common.sh@852 -- # return 0 00:22:32.004 00:31:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:32.004 00:31:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:32.004 00:31:18 -- common/autotest_common.sh@10 -- # set +x 00:22:32.004 00:31:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.004 00:31:18 -- host/digest.sh@120 -- # common_target_config 00:22:32.004 00:31:18 -- host/digest.sh@43 -- # rpc_cmd 00:22:32.004 00:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:32.004 00:31:18 -- common/autotest_common.sh@10 -- # set +x 00:22:32.004 null0 00:22:32.004 [2024-07-13 00:31:19.063384] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.004 [2024-07-13 00:31:19.087483] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:22:32.004 00:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:32.004 00:31:19 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:32.004 00:31:19 -- host/digest.sh@77 -- # local rw bs qd 00:22:32.004 00:31:19 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:32.004 00:31:19 -- host/digest.sh@80 -- # rw=randread 00:22:32.004 00:31:19 -- host/digest.sh@80 -- # bs=4096 00:22:32.004 00:31:19 -- host/digest.sh@80 -- # qd=128 00:22:32.004 00:31:19 -- host/digest.sh@82 -- # bperfpid=96766 00:22:32.004 00:31:19 -- host/digest.sh@83 -- # waitforlisten 96766 /var/tmp/bperf.sock 00:22:32.004 00:31:19 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:32.004 00:31:19 -- common/autotest_common.sh@819 -- # '[' -z 96766 ']' 00:22:32.004 00:31:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:32.004 00:31:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:32.004 00:31:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:32.004 00:31:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:32.004 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:22:32.004 [2024-07-13 00:31:19.139683] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:32.004 [2024-07-13 00:31:19.139946] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96766 ] 00:22:32.263 [2024-07-13 00:31:19.278895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.263 [2024-07-13 00:31:19.384411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.199 00:31:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:33.199 00:31:20 -- common/autotest_common.sh@852 -- # return 0 00:22:33.199 00:31:20 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:33.199 00:31:20 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:33.199 00:31:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:33.458 00:31:20 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:33.458 00:31:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:33.717 nvme0n1 00:22:33.717 00:31:20 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:33.717 00:31:20 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:33.717 Running I/O for 2 seconds... 
00:22:36.252 00:22:36.252 Latency(us) 00:22:36.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.252 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:36.252 nvme0n1 : 2.01 21630.25 84.49 0.00 0.00 5912.82 2204.39 20614.05 00:22:36.252 =================================================================================================================== 00:22:36.252 Total : 21630.25 84.49 0.00 0.00 5912.82 2204.39 20614.05 00:22:36.252 0 00:22:36.252 00:31:22 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:36.252 00:31:22 -- host/digest.sh@92 -- # get_accel_stats 00:22:36.252 00:31:22 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:36.252 00:31:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:36.252 00:31:22 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:36.252 | select(.opcode=="crc32c") 00:22:36.252 | "\(.module_name) \(.executed)"' 00:22:36.252 00:31:23 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:36.252 00:31:23 -- host/digest.sh@93 -- # exp_module=software 00:22:36.252 00:31:23 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:36.252 00:31:23 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:36.252 00:31:23 -- host/digest.sh@97 -- # killprocess 96766 00:22:36.252 00:31:23 -- common/autotest_common.sh@926 -- # '[' -z 96766 ']' 00:22:36.252 00:31:23 -- common/autotest_common.sh@930 -- # kill -0 96766 00:22:36.252 00:31:23 -- common/autotest_common.sh@931 -- # uname 00:22:36.252 00:31:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:36.252 00:31:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96766 00:22:36.252 00:31:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:36.252 killing process with pid 96766 00:22:36.252 00:31:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:36.252 00:31:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96766' 00:22:36.252 Received shutdown signal, test time was about 2.000000 seconds 00:22:36.252 00:22:36.252 Latency(us) 00:22:36.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.252 =================================================================================================================== 00:22:36.252 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:36.252 00:31:23 -- common/autotest_common.sh@945 -- # kill 96766 00:22:36.252 00:31:23 -- common/autotest_common.sh@950 -- # wait 96766 00:22:36.252 00:31:23 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:36.252 00:31:23 -- host/digest.sh@77 -- # local rw bs qd 00:22:36.252 00:31:23 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:36.252 00:31:23 -- host/digest.sh@80 -- # rw=randread 00:22:36.252 00:31:23 -- host/digest.sh@80 -- # bs=131072 00:22:36.252 00:31:23 -- host/digest.sh@80 -- # qd=16 00:22:36.252 00:31:23 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:36.252 00:31:23 -- host/digest.sh@82 -- # bperfpid=96852 00:22:36.252 00:31:23 -- host/digest.sh@83 -- # waitforlisten 96852 /var/tmp/bperf.sock 00:22:36.252 00:31:23 -- common/autotest_common.sh@819 -- # '[' -z 96852 ']' 00:22:36.252 00:31:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:36.252 00:31:23 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:22:36.252 00:31:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:36.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:36.252 00:31:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:36.252 00:31:23 -- common/autotest_common.sh@10 -- # set +x 00:22:36.510 [2024-07-13 00:31:23.506855] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:36.510 [2024-07-13 00:31:23.506934] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96852 ] 00:22:36.510 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:36.510 Zero copy mechanism will not be used. 00:22:36.510 [2024-07-13 00:31:23.640805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.769 [2024-07-13 00:31:23.762507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.334 00:31:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:37.334 00:31:24 -- common/autotest_common.sh@852 -- # return 0 00:22:37.334 00:31:24 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:37.334 00:31:24 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:37.334 00:31:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:37.898 00:31:24 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:37.898 00:31:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:38.155 nvme0n1 00:22:38.155 00:31:25 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:38.155 00:31:25 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:38.155 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:38.155 Zero copy mechanism will not be used. 00:22:38.155 Running I/O for 2 seconds... 
00:22:40.687 00:22:40.687 Latency(us) 00:22:40.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.687 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:40.687 nvme0n1 : 2.00 9924.42 1240.55 0.00 0.00 1609.08 692.60 6911.07 00:22:40.687 =================================================================================================================== 00:22:40.687 Total : 9924.42 1240.55 0.00 0.00 1609.08 692.60 6911.07 00:22:40.687 0 00:22:40.687 00:31:27 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:40.687 00:31:27 -- host/digest.sh@92 -- # get_accel_stats 00:22:40.687 00:31:27 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:40.687 00:31:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:40.687 00:31:27 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:40.687 | select(.opcode=="crc32c") 00:22:40.687 | "\(.module_name) \(.executed)"' 00:22:40.687 00:31:27 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:40.687 00:31:27 -- host/digest.sh@93 -- # exp_module=software 00:22:40.687 00:31:27 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:40.687 00:31:27 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:40.687 00:31:27 -- host/digest.sh@97 -- # killprocess 96852 00:22:40.687 00:31:27 -- common/autotest_common.sh@926 -- # '[' -z 96852 ']' 00:22:40.687 00:31:27 -- common/autotest_common.sh@930 -- # kill -0 96852 00:22:40.687 00:31:27 -- common/autotest_common.sh@931 -- # uname 00:22:40.687 00:31:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:40.687 00:31:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96852 00:22:40.687 00:31:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:40.687 00:31:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:40.687 killing process with pid 96852 00:22:40.687 00:31:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96852' 00:22:40.687 Received shutdown signal, test time was about 2.000000 seconds 00:22:40.687 00:22:40.687 Latency(us) 00:22:40.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.687 =================================================================================================================== 00:22:40.687 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.687 00:31:27 -- common/autotest_common.sh@945 -- # kill 96852 00:22:40.687 00:31:27 -- common/autotest_common.sh@950 -- # wait 96852 00:22:40.687 00:31:27 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:40.687 00:31:27 -- host/digest.sh@77 -- # local rw bs qd 00:22:40.687 00:31:27 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:40.687 00:31:27 -- host/digest.sh@80 -- # rw=randwrite 00:22:40.687 00:31:27 -- host/digest.sh@80 -- # bs=4096 00:22:40.687 00:31:27 -- host/digest.sh@80 -- # qd=128 00:22:40.687 00:31:27 -- host/digest.sh@82 -- # bperfpid=96942 00:22:40.687 00:31:27 -- host/digest.sh@83 -- # waitforlisten 96942 /var/tmp/bperf.sock 00:22:40.687 00:31:27 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:40.687 00:31:27 -- common/autotest_common.sh@819 -- # '[' -z 96942 ']' 00:22:40.687 00:31:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:40.687 00:31:27 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:22:40.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:40.688 00:31:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:40.688 00:31:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:40.688 00:31:27 -- common/autotest_common.sh@10 -- # set +x 00:22:40.946 [2024-07-13 00:31:27.936912] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:40.946 [2024-07-13 00:31:27.937057] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96942 ] 00:22:40.946 [2024-07-13 00:31:28.070945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.946 [2024-07-13 00:31:28.174583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.880 00:31:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:41.880 00:31:28 -- common/autotest_common.sh@852 -- # return 0 00:22:41.880 00:31:28 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:41.880 00:31:28 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:41.880 00:31:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:42.139 00:31:29 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:42.139 00:31:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:42.398 nvme0n1 00:22:42.398 00:31:29 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:42.398 00:31:29 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:42.398 Running I/O for 2 seconds... 
00:22:44.926 00:22:44.926 Latency(us) 00:22:44.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.926 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:44.926 nvme0n1 : 2.00 27115.43 105.92 0.00 0.00 4715.26 1899.05 8817.57 00:22:44.926 =================================================================================================================== 00:22:44.926 Total : 27115.43 105.92 0.00 0.00 4715.26 1899.05 8817.57 00:22:44.926 0 00:22:44.926 00:31:31 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:44.926 00:31:31 -- host/digest.sh@92 -- # get_accel_stats 00:22:44.926 00:31:31 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:44.926 00:31:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:44.926 00:31:31 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:44.926 | select(.opcode=="crc32c") 00:22:44.926 | "\(.module_name) \(.executed)"' 00:22:44.926 00:31:31 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:44.926 00:31:31 -- host/digest.sh@93 -- # exp_module=software 00:22:44.926 00:31:31 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:44.926 00:31:31 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:44.926 00:31:31 -- host/digest.sh@97 -- # killprocess 96942 00:22:44.926 00:31:31 -- common/autotest_common.sh@926 -- # '[' -z 96942 ']' 00:22:44.926 00:31:31 -- common/autotest_common.sh@930 -- # kill -0 96942 00:22:44.926 00:31:31 -- common/autotest_common.sh@931 -- # uname 00:22:44.926 00:31:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:44.926 00:31:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96942 00:22:44.926 00:31:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:44.926 00:31:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:44.926 killing process with pid 96942 00:22:44.926 00:31:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96942' 00:22:44.926 Received shutdown signal, test time was about 2.000000 seconds 00:22:44.926 00:22:44.926 Latency(us) 00:22:44.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.926 =================================================================================================================== 00:22:44.926 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:44.926 00:31:31 -- common/autotest_common.sh@945 -- # kill 96942 00:22:44.926 00:31:31 -- common/autotest_common.sh@950 -- # wait 96942 00:22:45.185 00:31:32 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:45.185 00:31:32 -- host/digest.sh@77 -- # local rw bs qd 00:22:45.185 00:31:32 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:45.185 00:31:32 -- host/digest.sh@80 -- # rw=randwrite 00:22:45.185 00:31:32 -- host/digest.sh@80 -- # bs=131072 00:22:45.185 00:31:32 -- host/digest.sh@80 -- # qd=16 00:22:45.185 00:31:32 -- host/digest.sh@82 -- # bperfpid=97034 00:22:45.185 00:31:32 -- host/digest.sh@83 -- # waitforlisten 97034 /var/tmp/bperf.sock 00:22:45.185 00:31:32 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:45.185 00:31:32 -- common/autotest_common.sh@819 -- # '[' -z 97034 ']' 00:22:45.185 00:31:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:45.185 00:31:32 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:22:45.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:45.185 00:31:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:45.185 00:31:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:45.185 00:31:32 -- common/autotest_common.sh@10 -- # set +x 00:22:45.185 [2024-07-13 00:31:32.217292] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:45.185 [2024-07-13 00:31:32.217433] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97034 ] 00:22:45.185 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:45.185 Zero copy mechanism will not be used. 00:22:45.185 [2024-07-13 00:31:32.359890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.443 [2024-07-13 00:31:32.446647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.009 00:31:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:46.009 00:31:33 -- common/autotest_common.sh@852 -- # return 0 00:22:46.009 00:31:33 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:46.009 00:31:33 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:46.009 00:31:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:46.267 00:31:33 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:46.267 00:31:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:46.525 nvme0n1 00:22:46.783 00:31:33 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:46.783 00:31:33 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:46.783 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:46.783 Zero copy mechanism will not be used. 00:22:46.783 Running I/O for 2 seconds... 
00:22:48.682 00:22:48.682 Latency(us) 00:22:48.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.682 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:48.682 nvme0n1 : 2.00 8414.63 1051.83 0.00 0.00 1897.40 1601.16 8162.21 00:22:48.682 =================================================================================================================== 00:22:48.682 Total : 8414.63 1051.83 0.00 0.00 1897.40 1601.16 8162.21 00:22:48.682 0 00:22:48.941 00:31:35 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:48.941 00:31:35 -- host/digest.sh@92 -- # get_accel_stats 00:22:48.941 00:31:35 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:48.941 00:31:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:48.941 00:31:35 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:48.941 | select(.opcode=="crc32c") 00:22:48.941 | "\(.module_name) \(.executed)"' 00:22:49.199 00:31:36 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:49.199 00:31:36 -- host/digest.sh@93 -- # exp_module=software 00:22:49.199 00:31:36 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:49.199 00:31:36 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:49.199 00:31:36 -- host/digest.sh@97 -- # killprocess 97034 00:22:49.199 00:31:36 -- common/autotest_common.sh@926 -- # '[' -z 97034 ']' 00:22:49.199 00:31:36 -- common/autotest_common.sh@930 -- # kill -0 97034 00:22:49.199 00:31:36 -- common/autotest_common.sh@931 -- # uname 00:22:49.199 00:31:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:49.199 00:31:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97034 00:22:49.199 00:31:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:49.199 00:31:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:49.199 00:31:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97034' 00:22:49.199 killing process with pid 97034 00:22:49.199 00:31:36 -- common/autotest_common.sh@945 -- # kill 97034 00:22:49.199 Received shutdown signal, test time was about 2.000000 seconds 00:22:49.199 00:22:49.199 Latency(us) 00:22:49.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.199 =================================================================================================================== 00:22:49.199 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:49.199 00:31:36 -- common/autotest_common.sh@950 -- # wait 97034 00:22:49.458 00:31:36 -- host/digest.sh@126 -- # killprocess 96712 00:22:49.459 00:31:36 -- common/autotest_common.sh@926 -- # '[' -z 96712 ']' 00:22:49.459 00:31:36 -- common/autotest_common.sh@930 -- # kill -0 96712 00:22:49.459 00:31:36 -- common/autotest_common.sh@931 -- # uname 00:22:49.459 00:31:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:49.459 00:31:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96712 00:22:49.459 00:31:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:49.459 00:31:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:49.459 killing process with pid 96712 00:22:49.459 00:31:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96712' 00:22:49.459 00:31:36 -- common/autotest_common.sh@945 -- # kill 96712 00:22:49.459 00:31:36 -- common/autotest_common.sh@950 -- # wait 96712 00:22:49.717 00:22:49.717 real 0m18.914s 00:22:49.717 
user 0m35.657s 00:22:49.717 sys 0m4.868s 00:22:49.717 00:31:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:49.717 00:31:36 -- common/autotest_common.sh@10 -- # set +x 00:22:49.717 ************************************ 00:22:49.717 END TEST nvmf_digest_clean 00:22:49.717 ************************************ 00:22:49.717 00:31:36 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:49.717 00:31:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:49.717 00:31:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:49.717 00:31:36 -- common/autotest_common.sh@10 -- # set +x 00:22:49.717 ************************************ 00:22:49.717 START TEST nvmf_digest_error 00:22:49.717 ************************************ 00:22:49.717 00:31:36 -- common/autotest_common.sh@1104 -- # run_digest_error 00:22:49.717 00:31:36 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:49.717 00:31:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:49.717 00:31:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:49.717 00:31:36 -- common/autotest_common.sh@10 -- # set +x 00:22:49.717 00:31:36 -- nvmf/common.sh@469 -- # nvmfpid=97147 00:22:49.717 00:31:36 -- nvmf/common.sh@470 -- # waitforlisten 97147 00:22:49.717 00:31:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:49.717 00:31:36 -- common/autotest_common.sh@819 -- # '[' -z 97147 ']' 00:22:49.717 00:31:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.717 00:31:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:49.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.717 00:31:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.717 00:31:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:49.717 00:31:36 -- common/autotest_common.sh@10 -- # set +x 00:22:49.717 [2024-07-13 00:31:36.891730] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:49.717 [2024-07-13 00:31:36.891799] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.976 [2024-07-13 00:31:37.024404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.976 [2024-07-13 00:31:37.106988] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:49.976 [2024-07-13 00:31:37.107132] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.976 [2024-07-13 00:31:37.107145] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.976 [2024-07-13 00:31:37.107153] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:49.976 [2024-07-13 00:31:37.107186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.912 00:31:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:50.912 00:31:37 -- common/autotest_common.sh@852 -- # return 0 00:22:50.912 00:31:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:50.912 00:31:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:50.912 00:31:37 -- common/autotest_common.sh@10 -- # set +x 00:22:50.912 00:31:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.912 00:31:37 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:50.912 00:31:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.912 00:31:37 -- common/autotest_common.sh@10 -- # set +x 00:22:50.912 [2024-07-13 00:31:37.887774] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:50.912 00:31:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.912 00:31:37 -- host/digest.sh@104 -- # common_target_config 00:22:50.912 00:31:37 -- host/digest.sh@43 -- # rpc_cmd 00:22:50.912 00:31:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.912 00:31:37 -- common/autotest_common.sh@10 -- # set +x 00:22:50.912 null0 00:22:50.912 [2024-07-13 00:31:38.023438] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.912 [2024-07-13 00:31:38.047598] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.912 00:31:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.912 00:31:38 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:50.912 00:31:38 -- host/digest.sh@54 -- # local rw bs qd 00:22:50.912 00:31:38 -- host/digest.sh@56 -- # rw=randread 00:22:50.912 00:31:38 -- host/digest.sh@56 -- # bs=4096 00:22:50.912 00:31:38 -- host/digest.sh@56 -- # qd=128 00:22:50.912 00:31:38 -- host/digest.sh@58 -- # bperfpid=97191 00:22:50.912 00:31:38 -- host/digest.sh@60 -- # waitforlisten 97191 /var/tmp/bperf.sock 00:22:50.912 00:31:38 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:50.912 00:31:38 -- common/autotest_common.sh@819 -- # '[' -z 97191 ']' 00:22:50.912 00:31:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:50.912 00:31:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:50.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:50.912 00:31:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:50.912 00:31:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:50.912 00:31:38 -- common/autotest_common.sh@10 -- # set +x 00:22:50.912 [2024-07-13 00:31:38.100878] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:50.912 [2024-07-13 00:31:38.100973] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97191 ] 00:22:51.171 [2024-07-13 00:31:38.237752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.171 [2024-07-13 00:31:38.334285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.105 00:31:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:52.105 00:31:39 -- common/autotest_common.sh@852 -- # return 0 00:22:52.105 00:31:39 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:52.105 00:31:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:52.105 00:31:39 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:52.105 00:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.105 00:31:39 -- common/autotest_common.sh@10 -- # set +x 00:22:52.105 00:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.105 00:31:39 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:52.105 00:31:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:52.364 nvme0n1 00:22:52.364 00:31:39 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:52.364 00:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.364 00:31:39 -- common/autotest_common.sh@10 -- # set +x 00:22:52.364 00:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.364 00:31:39 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:52.364 00:31:39 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:52.623 Running I/O for 2 seconds... 
00:22:52.623 [2024-07-13 00:31:39.728423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.623 [2024-07-13 00:31:39.728476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.623 [2024-07-13 00:31:39.728489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.623 [2024-07-13 00:31:39.741783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.623 [2024-07-13 00:31:39.741829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.623 [2024-07-13 00:31:39.741841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.623 [2024-07-13 00:31:39.754923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.623 [2024-07-13 00:31:39.754968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.623 [2024-07-13 00:31:39.754980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.623 [2024-07-13 00:31:39.767574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.623 [2024-07-13 00:31:39.767606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.623 [2024-07-13 00:31:39.767628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.623 [2024-07-13 00:31:39.780356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.623 [2024-07-13 00:31:39.780390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.623 [2024-07-13 00:31:39.780402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.623 [2024-07-13 00:31:39.791893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.623 [2024-07-13 00:31:39.791937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.623 [2024-07-13 00:31:39.791949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.623 [2024-07-13 00:31:39.801363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.623 [2024-07-13 00:31:39.801396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.623 [2024-07-13 00:31:39.801407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.623 [2024-07-13 00:31:39.812019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.623 [2024-07-13 00:31:39.812064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.623 [2024-07-13 00:31:39.812076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.623 [2024-07-13 00:31:39.823239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.623 [2024-07-13 00:31:39.823272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.623 [2024-07-13 00:31:39.823284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.623 [2024-07-13 00:31:39.832973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.623 [2024-07-13 00:31:39.833017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.623 [2024-07-13 00:31:39.833041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.623 [2024-07-13 00:31:39.843936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.623 [2024-07-13 00:31:39.843968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.623 [2024-07-13 00:31:39.843980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.883 [2024-07-13 00:31:39.853953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.883 [2024-07-13 00:31:39.854002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.883 [2024-07-13 00:31:39.854014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.883 [2024-07-13 00:31:39.864741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.883 [2024-07-13 00:31:39.864777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.883 [2024-07-13 00:31:39.864790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.883 [2024-07-13 00:31:39.876853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.883 [2024-07-13 00:31:39.876900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.883 [2024-07-13 00:31:39.876912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.883 [2024-07-13 00:31:39.890050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.883 [2024-07-13 00:31:39.890096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.883 [2024-07-13 00:31:39.890107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.883 [2024-07-13 00:31:39.901342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.883 [2024-07-13 00:31:39.901375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.883 [2024-07-13 00:31:39.901388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.883 [2024-07-13 00:31:39.910489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.883 [2024-07-13 00:31:39.910534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.883 [2024-07-13 00:31:39.910546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.883 [2024-07-13 00:31:39.922581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.883 [2024-07-13 00:31:39.922624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.883 [2024-07-13 00:31:39.922638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.883 [2024-07-13 00:31:39.932467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.883 [2024-07-13 00:31:39.932500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.883 [2024-07-13 00:31:39.932512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.883 [2024-07-13 00:31:39.943558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.883 [2024-07-13 00:31:39.943592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.883 [2024-07-13 00:31:39.943603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.883 [2024-07-13 00:31:39.953436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.883 [2024-07-13 00:31:39.953471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:52.883 [2024-07-13 00:31:39.953483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.883 [2024-07-13 00:31:39.962761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.883 [2024-07-13 00:31:39.962794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.883 [2024-07-13 00:31:39.962806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.883 [2024-07-13 00:31:39.972021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.883 [2024-07-13 00:31:39.972054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.883 [2024-07-13 00:31:39.972065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.883 [2024-07-13 00:31:39.985209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.883 [2024-07-13 00:31:39.985243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.883 [2024-07-13 00:31:39.985255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.883 [2024-07-13 00:31:39.997268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.883 [2024-07-13 00:31:39.997302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.883 [2024-07-13 00:31:39.997313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.883 [2024-07-13 00:31:40.008483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.883 [2024-07-13 00:31:40.008520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.883 [2024-07-13 00:31:40.008533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.884 [2024-07-13 00:31:40.019456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.884 [2024-07-13 00:31:40.019491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.884 [2024-07-13 00:31:40.019503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.884 [2024-07-13 00:31:40.030202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.884 [2024-07-13 00:31:40.030238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.884 [2024-07-13 00:31:40.030250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.884 [2024-07-13 00:31:40.043097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.884 [2024-07-13 00:31:40.043131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.884 [2024-07-13 00:31:40.043143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.884 [2024-07-13 00:31:40.053140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.884 [2024-07-13 00:31:40.053173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.884 [2024-07-13 00:31:40.053185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.884 [2024-07-13 00:31:40.065078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.884 [2024-07-13 00:31:40.065128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.884 [2024-07-13 00:31:40.065150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.884 [2024-07-13 00:31:40.076533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.884 [2024-07-13 00:31:40.076567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.884 [2024-07-13 00:31:40.076579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.884 [2024-07-13 00:31:40.086861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.884 [2024-07-13 00:31:40.086894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.884 [2024-07-13 00:31:40.086905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.884 [2024-07-13 00:31:40.094953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.884 [2024-07-13 00:31:40.094986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.884 [2024-07-13 00:31:40.094997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.884 [2024-07-13 00:31:40.107367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:52.884 [2024-07-13 00:31:40.107400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.884 [2024-07-13 00:31:40.107413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.143 [2024-07-13 00:31:40.121775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.143 [2024-07-13 00:31:40.121808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.143 [2024-07-13 00:31:40.121820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.143 [2024-07-13 00:31:40.134116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.143 [2024-07-13 00:31:40.134149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.143 [2024-07-13 00:31:40.134161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.143 [2024-07-13 00:31:40.144405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.143 [2024-07-13 00:31:40.144439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.143 [2024-07-13 00:31:40.144450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.143 [2024-07-13 00:31:40.157264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.143 [2024-07-13 00:31:40.157298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.157310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.170773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.144 [2024-07-13 00:31:40.170807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.170818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.182230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.144 [2024-07-13 00:31:40.182263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.182275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.192238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 
00:22:53.144 [2024-07-13 00:31:40.192283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.192309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.205423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.144 [2024-07-13 00:31:40.205467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.205478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.218288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.144 [2024-07-13 00:31:40.218334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.218345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.230965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.144 [2024-07-13 00:31:40.231014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.231026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.243363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.144 [2024-07-13 00:31:40.243396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.243407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.255572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.144 [2024-07-13 00:31:40.255606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.255627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.265971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.144 [2024-07-13 00:31:40.266004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.266015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.277022] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.144 [2024-07-13 00:31:40.277066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.277093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.290530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.144 [2024-07-13 00:31:40.290572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.290584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.302296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.144 [2024-07-13 00:31:40.302328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.302339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.312426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.144 [2024-07-13 00:31:40.312459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.312471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.324449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.144 [2024-07-13 00:31:40.324483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.324494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.336101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.144 [2024-07-13 00:31:40.336134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.336146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.348908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.144 [2024-07-13 00:31:40.348943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.348955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.357669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.144 [2024-07-13 00:31:40.357701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.357712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.144 [2024-07-13 00:31:40.370176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.144 [2024-07-13 00:31:40.370208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.144 [2024-07-13 00:31:40.370220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.383841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.383873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.383884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.394705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.394736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.394748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.405362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.405395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.405406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.419930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.419964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.419986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.432077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.432110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.432122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.444248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.444280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.444291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.456212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.456246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.456257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.465154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.465186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.465198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.476001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.476034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.476046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.487629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.487660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.487671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.497652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.497694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.497706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.506170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.506202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.506214] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.516078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.516110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.516121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.525553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.525589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.525601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.535421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.535454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.535465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.546855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.546886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.546898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.557466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.557499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.557510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.568494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.568527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.568539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.577967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.578000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:53.404 [2024-07-13 00:31:40.578012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.588371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.588404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.588416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.601549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.601583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.601595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.610628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.610660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.610671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.621514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.621546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.621557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.404 [2024-07-13 00:31:40.631472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.404 [2024-07-13 00:31:40.631535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.404 [2024-07-13 00:31:40.631556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.664 [2024-07-13 00:31:40.641515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.664 [2024-07-13 00:31:40.641547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.664 [2024-07-13 00:31:40.641558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.664 [2024-07-13 00:31:40.652684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.664 [2024-07-13 00:31:40.652735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:21091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.664 [2024-07-13 00:31:40.652748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.664 [2024-07-13 00:31:40.662990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.664 [2024-07-13 00:31:40.663023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.664 [2024-07-13 00:31:40.663034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.664 [2024-07-13 00:31:40.672353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.664 [2024-07-13 00:31:40.672385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.664 [2024-07-13 00:31:40.672397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.664 [2024-07-13 00:31:40.681963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.664 [2024-07-13 00:31:40.681995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.664 [2024-07-13 00:31:40.682007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.664 [2024-07-13 00:31:40.690825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.665 [2024-07-13 00:31:40.690857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.690868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.665 [2024-07-13 00:31:40.702526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.665 [2024-07-13 00:31:40.702558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.702570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.665 [2024-07-13 00:31:40.714338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.665 [2024-07-13 00:31:40.714373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.714385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.665 [2024-07-13 00:31:40.724151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.665 [2024-07-13 00:31:40.724185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.724196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.665 [2024-07-13 00:31:40.736399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.665 [2024-07-13 00:31:40.736434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.736446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.665 [2024-07-13 00:31:40.749299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.665 [2024-07-13 00:31:40.749333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.749344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.665 [2024-07-13 00:31:40.762595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.665 [2024-07-13 00:31:40.762637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.762650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.665 [2024-07-13 00:31:40.774362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.665 [2024-07-13 00:31:40.774395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.774406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.665 [2024-07-13 00:31:40.787546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.665 [2024-07-13 00:31:40.787578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.787589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.665 [2024-07-13 00:31:40.798011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.665 [2024-07-13 00:31:40.798043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.798054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.665 [2024-07-13 00:31:40.807338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 
00:22:53.665 [2024-07-13 00:31:40.807370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.807382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.665 [2024-07-13 00:31:40.819751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.665 [2024-07-13 00:31:40.819784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.819795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.665 [2024-07-13 00:31:40.833098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.665 [2024-07-13 00:31:40.833131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.833143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.665 [2024-07-13 00:31:40.844761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.665 [2024-07-13 00:31:40.844794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.844806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.665 [2024-07-13 00:31:40.857760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.665 [2024-07-13 00:31:40.857793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.857805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.665 [2024-07-13 00:31:40.869684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.665 [2024-07-13 00:31:40.869716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.869728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.665 [2024-07-13 00:31:40.881224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.665 [2024-07-13 00:31:40.881257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.881268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.665 [2024-07-13 00:31:40.891291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.665 [2024-07-13 00:31:40.891325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.665 [2024-07-13 00:31:40.891337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:40.904369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:40.904404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:40.904415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:40.918241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:40.918287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:40.918299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:40.928356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:40.928389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:40.928401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:40.940942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:40.940991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:40.941019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:40.953316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:40.953362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:40.953373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:40.963512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:40.963558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:40.963569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:40.974353] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:40.974386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:40.974398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:40.986300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:40.986333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:40.986345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:40.995447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:40.995493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:40.995504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:41.007719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:41.007752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:41.007767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:41.024892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:41.024939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:41.024951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:41.035361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:41.035393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:41.035405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:41.045057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:41.045108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:41.045120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:41.057577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:41.057610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:41.057635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:41.070881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:41.070938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:41.070950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:41.083878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:41.083938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:41.083950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:41.096699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:41.096737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:41.096749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:41.108131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:41.108173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:41.108185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:41.119602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:41.119644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.925 [2024-07-13 00:31:41.119656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.925 [2024-07-13 00:31:41.129177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.925 [2024-07-13 00:31:41.129209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.926 [2024-07-13 00:31:41.129221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.926 [2024-07-13 00:31:41.141431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.926 [2024-07-13 00:31:41.141464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.926 [2024-07-13 00:31:41.141485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.926 [2024-07-13 00:31:41.151202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:53.926 [2024-07-13 00:31:41.151237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.926 [2024-07-13 00:31:41.151249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.185 [2024-07-13 00:31:41.164116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.185 [2024-07-13 00:31:41.164150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.185 [2024-07-13 00:31:41.164161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.185 [2024-07-13 00:31:41.176479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.185 [2024-07-13 00:31:41.176512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.185 [2024-07-13 00:31:41.176523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.185 [2024-07-13 00:31:41.187644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.185 [2024-07-13 00:31:41.187675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.185 [2024-07-13 00:31:41.187686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.185 [2024-07-13 00:31:41.197098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.185 [2024-07-13 00:31:41.197141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.185 [2024-07-13 00:31:41.197153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.185 [2024-07-13 00:31:41.205870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.185 [2024-07-13 00:31:41.205902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.185 [2024-07-13 00:31:41.205913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.185 [2024-07-13 00:31:41.218704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.185 [2024-07-13 00:31:41.218737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.185 [2024-07-13 00:31:41.218750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.185 [2024-07-13 00:31:41.230783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.185 [2024-07-13 00:31:41.230816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.185 [2024-07-13 00:31:41.230837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.185 [2024-07-13 00:31:41.240984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.185 [2024-07-13 00:31:41.241027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.185 [2024-07-13 00:31:41.241039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.185 [2024-07-13 00:31:41.253020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.185 [2024-07-13 00:31:41.253054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.185 [2024-07-13 00:31:41.253065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.185 [2024-07-13 00:31:41.261261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.185 [2024-07-13 00:31:41.261305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.185 [2024-07-13 00:31:41.261326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.185 [2024-07-13 00:31:41.274005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.185 [2024-07-13 00:31:41.274047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.185 [2024-07-13 00:31:41.274059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.185 [2024-07-13 00:31:41.287847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.185 [2024-07-13 00:31:41.287880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:54.185 [2024-07-13 00:31:41.287895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.185 [2024-07-13 00:31:41.299995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.185 [2024-07-13 00:31:41.300027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.185 [2024-07-13 00:31:41.300039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.185 [2024-07-13 00:31:41.312212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.185 [2024-07-13 00:31:41.312258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.185 [2024-07-13 00:31:41.312269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.185 [2024-07-13 00:31:41.325407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.185 [2024-07-13 00:31:41.325440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.185 [2024-07-13 00:31:41.325453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.185 [2024-07-13 00:31:41.335284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.185 [2024-07-13 00:31:41.335315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.185 [2024-07-13 00:31:41.335327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.185 [2024-07-13 00:31:41.346622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.186 [2024-07-13 00:31:41.346653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.186 [2024-07-13 00:31:41.346664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.186 [2024-07-13 00:31:41.357605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.186 [2024-07-13 00:31:41.357647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.186 [2024-07-13 00:31:41.357659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.186 [2024-07-13 00:31:41.370378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.186 [2024-07-13 00:31:41.370411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 
lba:6589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.186 [2024-07-13 00:31:41.370423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.186 [2024-07-13 00:31:41.382606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.186 [2024-07-13 00:31:41.382660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.186 [2024-07-13 00:31:41.382672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.186 [2024-07-13 00:31:41.395414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.186 [2024-07-13 00:31:41.395447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.186 [2024-07-13 00:31:41.395458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.186 [2024-07-13 00:31:41.404218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.186 [2024-07-13 00:31:41.404263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.186 [2024-07-13 00:31:41.404274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.455 [2024-07-13 00:31:41.417441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.455 [2024-07-13 00:31:41.417475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.455 [2024-07-13 00:31:41.417488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.455 [2024-07-13 00:31:41.428996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.455 [2024-07-13 00:31:41.429040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.455 [2024-07-13 00:31:41.429051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.455 [2024-07-13 00:31:41.439398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.439443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.439455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.449763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.449797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.449811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.459792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.459825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.459837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.469378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.469422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.469445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.482565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.482609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.482634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.493192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.493224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.493236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.504092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.504135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.504158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.515045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.515091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.515102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.524769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 
00:22:54.456 [2024-07-13 00:31:41.524802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.524813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.533212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.533243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.533255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.544522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.544553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.544565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.555174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.555206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.555217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.565133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.565167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.565179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.577858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.577909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.577922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.589235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.589301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.589325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.601258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.601292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.601304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.612531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.612577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.612589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.625169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.625214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.625225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.634775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.634815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.634826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.645382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.645426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.645438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.658502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.658534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.658546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.456 [2024-07-13 00:31:41.669399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.456 [2024-07-13 00:31:41.669432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.456 [2024-07-13 00:31:41.669443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.726 [2024-07-13 00:31:41.679229] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.726 [2024-07-13 00:31:41.679285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.726 [2024-07-13 00:31:41.679299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.726 [2024-07-13 00:31:41.693472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.726 [2024-07-13 00:31:41.693512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.726 [2024-07-13 00:31:41.693525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.726 [2024-07-13 00:31:41.707663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c0580) 00:22:54.726 [2024-07-13 00:31:41.707695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.726 [2024-07-13 00:31:41.707707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.726 00:22:54.726 Latency(us) 00:22:54.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.726 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:54.726 nvme0n1 : 2.00 22313.56 87.16 0.00 0.00 5729.75 2636.33 19065.02 00:22:54.726 =================================================================================================================== 00:22:54.726 Total : 22313.56 87.16 0.00 0.00 5729.75 2636.33 19065.02 00:22:54.726 0 00:22:54.726 00:31:41 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:54.726 00:31:41 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:54.726 00:31:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:54.726 00:31:41 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:54.727 | .driver_specific 00:22:54.727 | .nvme_error 00:22:54.727 | .status_code 00:22:54.727 | .command_transient_transport_error' 00:22:54.727 00:31:41 -- host/digest.sh@71 -- # (( 175 > 0 )) 00:22:54.727 00:31:41 -- host/digest.sh@73 -- # killprocess 97191 00:22:54.727 00:31:41 -- common/autotest_common.sh@926 -- # '[' -z 97191 ']' 00:22:54.727 00:31:41 -- common/autotest_common.sh@930 -- # kill -0 97191 00:22:54.727 00:31:41 -- common/autotest_common.sh@931 -- # uname 00:22:54.727 00:31:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:54.727 00:31:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97191 00:22:54.985 00:31:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:54.985 00:31:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:54.985 killing process with pid 97191 00:22:54.985 00:31:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97191' 00:22:54.985 00:31:41 -- common/autotest_common.sh@945 -- # kill 97191 00:22:54.985 Received shutdown signal, test time was about 2.000000 seconds 00:22:54.985 00:22:54.985 Latency(us) 00:22:54.985 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.985 =================================================================================================================== 00:22:54.985 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.985 00:31:41 -- common/autotest_common.sh@950 -- # wait 97191 00:22:55.244 00:31:42 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:22:55.244 00:31:42 -- host/digest.sh@54 -- # local rw bs qd 00:22:55.244 00:31:42 -- host/digest.sh@56 -- # rw=randread 00:22:55.244 00:31:42 -- host/digest.sh@56 -- # bs=131072 00:22:55.244 00:31:42 -- host/digest.sh@56 -- # qd=16 00:22:55.244 00:31:42 -- host/digest.sh@58 -- # bperfpid=97287 00:22:55.244 00:31:42 -- host/digest.sh@60 -- # waitforlisten 97287 /var/tmp/bperf.sock 00:22:55.244 00:31:42 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:55.244 00:31:42 -- common/autotest_common.sh@819 -- # '[' -z 97287 ']' 00:22:55.244 00:31:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:55.244 00:31:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:55.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:55.244 00:31:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:55.244 00:31:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:55.244 00:31:42 -- common/autotest_common.sh@10 -- # set +x 00:22:55.244 [2024-07-13 00:31:42.299900] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:55.244 [2024-07-13 00:31:42.300022] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97287 ] 00:22:55.244 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:55.244 Zero copy mechanism will not be used. 
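The shell trace above is where digest.sh turns the error flood into a pass/fail result: bdevperf's RPC socket is queried with bdev_get_iostat and the transient transport error counter is pulled out with jq (175 in this run), which must be greater than zero. A minimal stand-alone sketch of that check, assuming a bdevperf instance is already serving RPCs on /var/tmp/bperf.sock with the bdev under test attached as nvme0n1:

#!/usr/bin/env bash
# Sketch of the get_transient_errcount check from the trace above.
# Assumes bdevperf is listening on /var/tmp/bperf.sock and that
# bdev_nvme_set_options was called with --nvme-error-stat so the
# per-bdev NVMe error counters are populated.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# Same jq path as in the trace: per-bdev NVMe error stats, transient transport errors.
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# Fail the test if none of the injected digest errors were counted.
(( errcount > 0 )) || exit 1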
00:22:55.244 [2024-07-13 00:31:42.440888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.504 [2024-07-13 00:31:42.538425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.075 00:31:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:56.075 00:31:43 -- common/autotest_common.sh@852 -- # return 0 00:22:56.075 00:31:43 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:56.075 00:31:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:56.333 00:31:43 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:56.333 00:31:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.333 00:31:43 -- common/autotest_common.sh@10 -- # set +x 00:22:56.333 00:31:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.333 00:31:43 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:56.333 00:31:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:56.591 nvme0n1 00:22:56.591 00:31:43 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:56.591 00:31:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.591 00:31:43 -- common/autotest_common.sh@10 -- # set +x 00:22:56.591 00:31:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.591 00:31:43 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:56.591 00:31:43 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:56.851 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:56.851 Zero copy mechanism will not be used. 00:22:56.851 Running I/O for 2 seconds... 
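The trace just above wires up the second run (randread, 131072-byte I/O, queue depth 16) before the digest errors below start: crc32c error injection is disabled while the controller attaches, the TCP controller is attached with data digests enabled (--ddgst), injection is then switched to corrupt mode, and perform_tests is kicked off. A hedged sketch of that sequence, reusing the paths and addresses that appear in this log (bperf_rpc goes to /var/tmp/bperf.sock; rpc_cmd goes to whichever default RPC socket the test framework has configured, which the trace does not show):

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
bperf_sock=/var/tmp/bperf.sock

# bdevperf side: keep NVMe error statistics and retry failed I/O indefinitely.
"$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Make sure no crc32c errors are injected while the controller attaches
# (rpc_cmd in the trace; the default RPC socket is assumed here).
"$rpc" accel_error_inject_error -o crc32c -t disable

# Attach the TCP target with data digest enabled so every payload is CRC-checked.
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Switch injection to corrupt mode so subsequent reads fail their digest check ...
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

# ... then drive the configured workload; each digest failure surfaces as a
# COMMAND TRANSIENT TRANSPORT ERROR completion like the ones logged below.
"$bperf_py" -s "$bperf_sock" perform_tests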
00:22:56.851 [2024-07-13 00:31:43.933506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.933577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.933599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.937014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.937050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.937063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.940416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.940450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.940461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.943519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.943553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.943565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.947323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.947357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.947368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.950818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.950851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.950862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.954147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.954181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.954192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.956872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.956907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.956920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.959896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.959928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.959940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.962919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.962952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.962964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.965851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.965884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.965895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.969192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.969226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.969237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.972222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.972255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.972267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.975632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.975663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.975675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.978393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.978426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.978437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.981833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.981866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.981877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.985176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.985209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.985221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.988317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.988348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.988360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.990537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.990569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.990580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.994255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.994287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.994298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:43.997343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:43.997375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:43.997387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:44.000207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:44.000238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:44.000249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:44.003388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:44.003422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:44.003433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:44.007043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:44.007077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:44.007089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:44.010325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:44.010359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:44.010371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:44.013297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:44.013330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.851 [2024-07-13 00:31:44.013342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.851 [2024-07-13 00:31:44.016386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.851 [2024-07-13 00:31:44.016417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.016428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.019506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.019540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 
[2024-07-13 00:31:44.019551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.023446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.023480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.023491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.026216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.026248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.026259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.029205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.029239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.029249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.032551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.032583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.032594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.035645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.035678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.035690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.038778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.038812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.038823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.042174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.042206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12096 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.042218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.045319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.045351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.045362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.048204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.048236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.048247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.051685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.051718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.051729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.054591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.054635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.054647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.058140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.058173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.058184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.061162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.061196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.061208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.064101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.064143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.064154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.067305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.067339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.067350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.070234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.070267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.070278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.073700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.073731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.073742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:56.852 [2024-07-13 00:31:44.077307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:56.852 [2024-07-13 00:31:44.077339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.852 [2024-07-13 00:31:44.077351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.080225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.080259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.080271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.083729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.083786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.083799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.087755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.087790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.087801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.091353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.091386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.091398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.094303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.094337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.094348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.097827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.097860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.097871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.100703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.100738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.100761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.103755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.103788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.103800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.107358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.107391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.107402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.110749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 
[2024-07-13 00:31:44.110781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.110793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.113799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.113831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.113842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.116960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.116994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.117005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.120124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.120155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.120167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.123336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.123367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.123378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.126503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.126535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.126546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.129986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.130017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.130028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.133271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.133303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.133314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.135887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.135920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.135932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.138936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.138969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.138981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.142960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.142994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.143005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.147088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.147120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.147131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.150261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.150294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.150305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.154042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.154075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.154086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.157339] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.157372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.157384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.160877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.114 [2024-07-13 00:31:44.160912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.114 [2024-07-13 00:31:44.160924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.114 [2024-07-13 00:31:44.164063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.164096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.164107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.167366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.167399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.167410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.171000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.171033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.171044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.174086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.174118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.174129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.177029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.177062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.177090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:57.115 [2024-07-13 00:31:44.180295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.180327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.180339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.183335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.183369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.183380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.186768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.186802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.186813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.190155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.190188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.190199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.193654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.193685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.193697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.197075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.197124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.197135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.200503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.200536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.200547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.203714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.203747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.203759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.207200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.207232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.207244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.210441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.210474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.210485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.213396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.213428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.213439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.216753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.216786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.216798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.219924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.219957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.219968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.223374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.223407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.223418] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.226456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.226487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.226498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.229607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.229650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.229661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.233041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.233089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.233101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.236083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.236114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.236126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.239369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.239401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.239412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.242778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.242809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.115 [2024-07-13 00:31:44.242820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.115 [2024-07-13 00:31:44.245248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.115 [2024-07-13 00:31:44.245278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.245290] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.248081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.248111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.248122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.252099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.252131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.252142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.255314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.255347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.255358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.258736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.258768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.258780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.262199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.262232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.262243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.265516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.265548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.265559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.268677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.268724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:57.116 [2024-07-13 00:31:44.268736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.272531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.272563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.272574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.275106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.275149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.275161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.278256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.278300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.278312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.281733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.281764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.281788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.284526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.284569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.284579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.287912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.287944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.287956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.291269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.291302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.291313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.294588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.294642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.294660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.297592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.297633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.297645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.301050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.301084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.301111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.304404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.304436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.304447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.307656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.307689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.307700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.310719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.310748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.310760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.313998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.314029] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.314040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.316203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.316233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.316244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.319103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.319135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.319147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.322733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.322765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.322777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.325670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.325710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.325722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.329496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.329529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.329540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.332201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.332232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.332243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.335389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.335421] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.116 [2024-07-13 00:31:44.335432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.116 [2024-07-13 00:31:44.339090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.116 [2024-07-13 00:31:44.339124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.117 [2024-07-13 00:31:44.339136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.377 [2024-07-13 00:31:44.343007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.377 [2024-07-13 00:31:44.343040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.377 [2024-07-13 00:31:44.343052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.377 [2024-07-13 00:31:44.346489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.377 [2024-07-13 00:31:44.346521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.377 [2024-07-13 00:31:44.346532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.377 [2024-07-13 00:31:44.350104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.377 [2024-07-13 00:31:44.350138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.377 [2024-07-13 00:31:44.350149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.377 [2024-07-13 00:31:44.353486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.377 [2024-07-13 00:31:44.353518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.377 [2024-07-13 00:31:44.353529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.377 [2024-07-13 00:31:44.356483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.377 [2024-07-13 00:31:44.356514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.377 [2024-07-13 00:31:44.356525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.377 [2024-07-13 00:31:44.359876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fd46f0) 00:22:57.377 [2024-07-13 00:31:44.359908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.377 [2024-07-13 00:31:44.359920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.377 [2024-07-13 00:31:44.362837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.377 [2024-07-13 00:31:44.362866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.377 [2024-07-13 00:31:44.362879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.377 [2024-07-13 00:31:44.365590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.377 [2024-07-13 00:31:44.365633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.377 [2024-07-13 00:31:44.365645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.377 [2024-07-13 00:31:44.368897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.377 [2024-07-13 00:31:44.368933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.377 [2024-07-13 00:31:44.368955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.377 [2024-07-13 00:31:44.371898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.377 [2024-07-13 00:31:44.371948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.377 [2024-07-13 00:31:44.371961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.377 [2024-07-13 00:31:44.375345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.377 [2024-07-13 00:31:44.375377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.377 [2024-07-13 00:31:44.375388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.377 [2024-07-13 00:31:44.378809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.377 [2024-07-13 00:31:44.378842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.377 [2024-07-13 00:31:44.378853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.377 [2024-07-13 00:31:44.382007] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.377 [2024-07-13 00:31:44.382040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.377 [2024-07-13 00:31:44.382051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.377 [2024-07-13 00:31:44.385017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.377 [2024-07-13 00:31:44.385049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.377 [2024-07-13 00:31:44.385060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.388187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.388218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.388229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.391178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.391222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.391233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.394901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.394934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.394945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.397686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.397716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.397727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.400926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.400961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.400973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:57.378 [2024-07-13 00:31:44.404117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.404148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.404160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.407291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.407323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.407334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.410567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.410599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.410610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.413394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.413426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.413437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.416595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.416637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.416665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.419941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.419974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.419986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.422779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.422811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.422822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.425925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.425957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.425968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.429147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.429178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.429190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.432248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.432279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.432291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.434805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.434837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.434849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.438086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.438116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.438127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.441532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.441563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.441574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.378 [2024-07-13 00:31:44.445289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.378 [2024-07-13 00:31:44.445321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.378 [2024-07-13 00:31:44.445333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:22:57.378 [2024-07-13 00:31:44.447984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0)
00:22:57.378 [2024-07-13 00:31:44.448016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:57.378 [2024-07-13 00:31:44.448027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (nvme_tcp data digest error on tqpair=(0x1fd46f0), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining READ commands from 00:31:44.451 through 00:31:44.847; only the lba, cid, and sqhd values differ between occurrences ...]
00:22:57.643 [2024-07-13 00:31:44.851084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0)
00:22:57.643 [2024-07-13 00:31:44.851117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:57.643 [2024-07-13 00:31:44.851128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.643 [2024-07-13 00:31:44.854387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.643 [2024-07-13 00:31:44.854418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.643 [2024-07-13 00:31:44.854430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.643 [2024-07-13 00:31:44.857140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.643 [2024-07-13 00:31:44.857173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.643 [2024-07-13 00:31:44.857184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.643 [2024-07-13 00:31:44.860499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.643 [2024-07-13 00:31:44.860532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.643 [2024-07-13 00:31:44.860543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.643 [2024-07-13 00:31:44.863301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.643 [2024-07-13 00:31:44.863333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.643 [2024-07-13 00:31:44.863344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.643 [2024-07-13 00:31:44.866869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.643 [2024-07-13 00:31:44.866910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.643 [2024-07-13 00:31:44.866922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.904 [2024-07-13 00:31:44.870295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.904 [2024-07-13 00:31:44.870330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.904 [2024-07-13 00:31:44.870342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.904 [2024-07-13 00:31:44.873788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.904 [2024-07-13 00:31:44.873819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.904 [2024-07-13 00:31:44.873830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.904 [2024-07-13 00:31:44.877564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.904 [2024-07-13 00:31:44.877599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.904 [2024-07-13 00:31:44.877621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.904 [2024-07-13 00:31:44.881130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.904 [2024-07-13 00:31:44.881165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.904 [2024-07-13 00:31:44.881177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.904 [2024-07-13 00:31:44.884282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.904 [2024-07-13 00:31:44.884315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.904 [2024-07-13 00:31:44.884326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.904 [2024-07-13 00:31:44.887669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.904 [2024-07-13 00:31:44.887701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.904 [2024-07-13 00:31:44.887713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.904 [2024-07-13 00:31:44.891180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.904 [2024-07-13 00:31:44.891215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.904 [2024-07-13 00:31:44.891227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.904 [2024-07-13 00:31:44.894827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.904 [2024-07-13 00:31:44.894862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.904 [2024-07-13 00:31:44.894874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.904 [2024-07-13 00:31:44.898198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.904 [2024-07-13 00:31:44.898231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:57.904 [2024-07-13 00:31:44.898243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.904 [2024-07-13 00:31:44.901716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.904 [2024-07-13 00:31:44.901749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.904 [2024-07-13 00:31:44.901761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.904 [2024-07-13 00:31:44.905159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.904 [2024-07-13 00:31:44.905192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.904 [2024-07-13 00:31:44.905204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.904 [2024-07-13 00:31:44.908562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.904 [2024-07-13 00:31:44.908596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.904 [2024-07-13 00:31:44.908608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.904 [2024-07-13 00:31:44.912295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.904 [2024-07-13 00:31:44.912329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.904 [2024-07-13 00:31:44.912341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.904 [2024-07-13 00:31:44.915558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.904 [2024-07-13 00:31:44.915591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.904 [2024-07-13 00:31:44.915622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.904 [2024-07-13 00:31:44.918167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.904 [2024-07-13 00:31:44.918201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.904 [2024-07-13 00:31:44.918213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.904 [2024-07-13 00:31:44.921154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.904 [2024-07-13 00:31:44.921188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.904 [2024-07-13 00:31:44.921199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.925149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.925183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.925194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.928170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.928202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.928213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.931012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.931045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.931057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.934306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.934339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.934362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.937786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.937821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.937832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.940605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.940647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.940683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.944230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.944264] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.944276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.947766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.947798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.947810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.950810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.950843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.950855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.954101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.954135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.954146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.957150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.957182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.957193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.960226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.960259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.960270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.963960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.964159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.964176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.967758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 
[2024-07-13 00:31:44.967942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.968089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.970438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.970603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.970633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.974443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.974478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.974490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.977706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.977739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.977751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.980884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.980920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.980933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.984183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.984215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.984226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.987629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.987662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.987674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.991464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.991497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.991509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.994294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.994327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.994339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:44.998048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:44.998081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:44.998093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:45.001132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:45.001166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:45.001179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:45.004497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:45.004529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:45.004541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:45.007802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:45.007835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:45.007847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:45.010951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:45.010985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:45.010996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:45.014695] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:45.014728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:45.014740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.905 [2024-07-13 00:31:45.017772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.905 [2024-07-13 00:31:45.017806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.905 [2024-07-13 00:31:45.017818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.021214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.021246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.021258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.024049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.024081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.024092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.027348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.027382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.027394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.030809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.030838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.030849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.033685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.033712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.033723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:57.906 [2024-07-13 00:31:45.036283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.036313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.036323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.039638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.039682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.039694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.042798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.042827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.042838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.045696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.045736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.045747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.048883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.048912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.048924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.051801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.051830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.051841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.055166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.055195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.055206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.058814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.058842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.058853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.061748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.061776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.061787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.065078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.065107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.065134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.068231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.068260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.068271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.071500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.071528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.071539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.074549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.074578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.074588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.077629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.077684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.077696] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.080358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.080387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.080397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.083775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.083805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.083816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.087340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.087370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.087380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.090252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.090281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.090292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.093913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.093941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.093952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.097051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.097096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.097107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.100462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.100490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.100502] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.103484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.103513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.103524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.107010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.107040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.107051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.906 [2024-07-13 00:31:45.110662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.906 [2024-07-13 00:31:45.110702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.906 [2024-07-13 00:31:45.110713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.907 [2024-07-13 00:31:45.113854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.907 [2024-07-13 00:31:45.113883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.907 [2024-07-13 00:31:45.113894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.907 [2024-07-13 00:31:45.117141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.907 [2024-07-13 00:31:45.117171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.907 [2024-07-13 00:31:45.117182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.907 [2024-07-13 00:31:45.120211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.907 [2024-07-13 00:31:45.120239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.907 [2024-07-13 00:31:45.120250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.907 [2024-07-13 00:31:45.123474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.907 [2024-07-13 00:31:45.123503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:57.907 [2024-07-13 00:31:45.123514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:57.907 [2024-07-13 00:31:45.126692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.907 [2024-07-13 00:31:45.126720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.907 [2024-07-13 00:31:45.126731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:57.907 [2024-07-13 00:31:45.130052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:57.907 [2024-07-13 00:31:45.130083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.907 [2024-07-13 00:31:45.130097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.183 [2024-07-13 00:31:45.133676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.183 [2024-07-13 00:31:45.133716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.183 [2024-07-13 00:31:45.133727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.136897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.136929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.136941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.140446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.140476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.140488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.143136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.143165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.143176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.146226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.146255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.146266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.149895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.149923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.149933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.153135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.153162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.153172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.156448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.156475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.156487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.159546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.159575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.159585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.163472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.163501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.163512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.166303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.166333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.166344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.170042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.170073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.170085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.173407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.173450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.173461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.176748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.176782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.176794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.180160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.180190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.180208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.183236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.183265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.183276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.186914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.186944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.186956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.189784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.189813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.189824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.193033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 
[2024-07-13 00:31:45.193063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.193075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.196271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.196300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.196311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.199676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.199705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.199716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.202501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.202530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.202547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.206041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.206071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.206088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.209402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.209431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.209448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.212500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.212530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.212545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.215631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.215660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.215677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.218880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.184 [2024-07-13 00:31:45.218909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.184 [2024-07-13 00:31:45.218920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.184 [2024-07-13 00:31:45.221753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.221782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.221797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.225080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.225132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.225143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.228425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.228452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.228468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.231793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.231822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.231840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.235071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.235101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.235116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.238297] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.238327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.238338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.241791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.241821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.241837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.245146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.245174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.245184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.248254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.248283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.248295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.251622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.251666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.251680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.255095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.255125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.255136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.257928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.257957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.257969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:22:58.185 [2024-07-13 00:31:45.261730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.261759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.261775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.264310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.264339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.264355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.268006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.268047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.268059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.271731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.271777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.271796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.274898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.274927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.274943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.278144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.278173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.278184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.281329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.281359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.281370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.284891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.284929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.284941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.287925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.287953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.287969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.291227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.291256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.291274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.293862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.293892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.293903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.297344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.297372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.297384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.300857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.300885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.185 [2024-07-13 00:31:45.300897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.185 [2024-07-13 00:31:45.303942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.185 [2024-07-13 00:31:45.303971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.303982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.307526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.307555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.307567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.310882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.310913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.310928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.313592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.313660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.313672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.317800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.317828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.317841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.321145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.321175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.321189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.324182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.324210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.324221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.327745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.327775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:58.186 [2024-07-13 00:31:45.327786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.331411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.331442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.331453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.334933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.334961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.334972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.337918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.337947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.337958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.341077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.341122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.341133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.344497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.344524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.344535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.347322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.347350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.347361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.350884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.350913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16384 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.350923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.354268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.354298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.354309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.357289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.357318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.357333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.360851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.360881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.360892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.364340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.364369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.364380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.367954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.367985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.368000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.370756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.370786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.370803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.374092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.374121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.374133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.376986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.377034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.377052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.380553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.380583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.380598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.383847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.383876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.383892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.387863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.387891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.387902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.391285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.391314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.391331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.394261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.394290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.394305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.397394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 
00:22:58.186 [2024-07-13 00:31:45.397423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.186 [2024-07-13 00:31:45.397439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.186 [2024-07-13 00:31:45.400742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.186 [2024-07-13 00:31:45.400772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.187 [2024-07-13 00:31:45.400789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.187 [2024-07-13 00:31:45.404319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.187 [2024-07-13 00:31:45.404348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.187 [2024-07-13 00:31:45.404362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.187 [2024-07-13 00:31:45.407777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.187 [2024-07-13 00:31:45.407807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.187 [2024-07-13 00:31:45.407825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.447 [2024-07-13 00:31:45.411516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.447 [2024-07-13 00:31:45.411552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.447 [2024-07-13 00:31:45.411564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.447 [2024-07-13 00:31:45.415258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.447 [2024-07-13 00:31:45.415292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.447 [2024-07-13 00:31:45.415303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.447 [2024-07-13 00:31:45.418913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.447 [2024-07-13 00:31:45.418952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.447 [2024-07-13 00:31:45.418964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.447 [2024-07-13 00:31:45.422731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.422762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.422778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.426412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.426442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.426459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.430217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.430246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.430256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.432989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.433042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.433053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.436625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.436701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.436714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.440348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.440377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.440392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.443847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.443878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.443889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.446811] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.446840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.446854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.450139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.450170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.450181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.453469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.453497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.453508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.456869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.456901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.456913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.460384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.460416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.460429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.463851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.463881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.463893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.467549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.467578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.467593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:22:58.448 [2024-07-13 00:31:45.470571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.470599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.470610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.474387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.474417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.474428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.477835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.477865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.477880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.481586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.481624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.481636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.484866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.484895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.484907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.487405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.487433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.487449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.490825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.490854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.490865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.493287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.493316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.493328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.496699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.496729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.496740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.499932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.499962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.499978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.503434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.503463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.503474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.506386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.506424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.506435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.510093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.448 [2024-07-13 00:31:45.510123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.448 [2024-07-13 00:31:45.510141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.448 [2024-07-13 00:31:45.513521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.513550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.513561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.516893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.516932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.516945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.520512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.520543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.520559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.523854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.523883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.523895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.527405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.527433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.527449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.530186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.530216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.530232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.533474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.533503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.533520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.536585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.536634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:58.449 [2024-07-13 00:31:45.536646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.539857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.539886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.539897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.543440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.543469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.543488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.546524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.546552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.546563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.549702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.549729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.549745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.552738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.552767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.552784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.556116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.556144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.556155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.559730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.559757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2144 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.559771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.562856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.562884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.562895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.565373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.565401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.565412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.568288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.568316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.568327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.571469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.571498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.571509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.574805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.574833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.574845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.577888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.577918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.577929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.581233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.581263] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.581274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.584107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.584135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.584146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.587482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.587512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.587522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.591104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.591134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.591148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.594294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.594323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.594335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.597634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.597672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.597684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.600808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.600836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.600847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.449 [2024-07-13 00:31:45.604465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.449 [2024-07-13 00:31:45.604495] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.449 [2024-07-13 00:31:45.604507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.607886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.607916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.607927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.611025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.611053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.611065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.613671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.613698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.613709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.616881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.616908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.616920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.620195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.620225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.620242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.623451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.623480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.623491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.626587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 
00:22:58.450 [2024-07-13 00:31:45.626625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.626637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.630105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.630134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.630146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.633689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.633717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.633731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.636390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.636419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.636430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.639725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.639753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.639766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.642358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.642411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.642422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.645540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.645569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.645580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.648863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.648893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.648904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.651663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.651689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.651700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.655192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.655221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.655232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.658490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.658520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.658531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.662080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.662109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.662120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.665355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.665384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.665395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.668718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.668747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.668759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.450 [2024-07-13 00:31:45.672429] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.450 [2024-07-13 00:31:45.672475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.450 [2024-07-13 00:31:45.672486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.711 [2024-07-13 00:31:45.676048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.711 [2024-07-13 00:31:45.676094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.711 [2024-07-13 00:31:45.676105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.711 [2024-07-13 00:31:45.679741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.711 [2024-07-13 00:31:45.679770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.711 [2024-07-13 00:31:45.679781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.711 [2024-07-13 00:31:45.683584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.711 [2024-07-13 00:31:45.683628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.711 [2024-07-13 00:31:45.683642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.711 [2024-07-13 00:31:45.687225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.711 [2024-07-13 00:31:45.687255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.711 [2024-07-13 00:31:45.687266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.711 [2024-07-13 00:31:45.690926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.711 [2024-07-13 00:31:45.690956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.711 [2024-07-13 00:31:45.690967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.711 [2024-07-13 00:31:45.694326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.711 [2024-07-13 00:31:45.694355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.711 [2024-07-13 00:31:45.694366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:22:58.711 [2024-07-13 00:31:45.696791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.711 [2024-07-13 00:31:45.696820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.711 [2024-07-13 00:31:45.696832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.711 [2024-07-13 00:31:45.700267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.711 [2024-07-13 00:31:45.700294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.711 [2024-07-13 00:31:45.700306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.711 [2024-07-13 00:31:45.703650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.711 [2024-07-13 00:31:45.703678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.711 [2024-07-13 00:31:45.703699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.711 [2024-07-13 00:31:45.706865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.711 [2024-07-13 00:31:45.706893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.711 [2024-07-13 00:31:45.706903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.711 [2024-07-13 00:31:45.709873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.711 [2024-07-13 00:31:45.709902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.711 [2024-07-13 00:31:45.709913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.711 [2024-07-13 00:31:45.713375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.711 [2024-07-13 00:31:45.713404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.711 [2024-07-13 00:31:45.713415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.711 [2024-07-13 00:31:45.716462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.711 [2024-07-13 00:31:45.716492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.711 [2024-07-13 00:31:45.716503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.711 [2024-07-13 00:31:45.719490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.711 [2024-07-13 00:31:45.719520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.711 [2024-07-13 00:31:45.719531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.711 [2024-07-13 00:31:45.722770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.711 [2024-07-13 00:31:45.722799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.711 [2024-07-13 00:31:45.722810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.711 [2024-07-13 00:31:45.726178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.711 [2024-07-13 00:31:45.726207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.711 [2024-07-13 00:31:45.726218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.729586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.729625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.729638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.732781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.732812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.732823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.736363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.736394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.736405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.739803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.739850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.739863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.743519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.743549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.743560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.747106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.747135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.747146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.750424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.750453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.750464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.753468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.753496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.753507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.756043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.756070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.756081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.759036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.759065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.759076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.762385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.762414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:58.712 [2024-07-13 00:31:45.762426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.766092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.766121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.766132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.769422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.769451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.769463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.773429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.773459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.773469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.776450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.776480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.776495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.780008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.780041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.780054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.783568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.783598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.783610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.787093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.787122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.787134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.790470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.790498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.790511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.794163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.794192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.794206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.797859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.797889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.797900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.801398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.801428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.801439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.804909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.804939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.804950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.808130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.808158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.808171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.811495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.811524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.811535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.814684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.814712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.814729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.817713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.817740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.817752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.820984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.821045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.712 [2024-07-13 00:31:45.821056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.712 [2024-07-13 00:31:45.824245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.712 [2024-07-13 00:31:45.824274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.824284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.827276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.827306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.827317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.830772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.830801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.830811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.834212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 
[2024-07-13 00:31:45.834241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.834254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.837629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.837673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.837685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.841159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.841187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.841199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.843887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.843924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.843941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.847197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.847227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.847237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.850602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.850640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.850652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.854084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.854112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.854123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.856846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.856875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.856886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.859575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.859603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.859624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.862511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.862538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.862549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.865596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.865633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.865644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.868423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.868451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.868468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.871489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.871518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.871529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.874867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.874895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.874906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.878397] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.878427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.878444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.882146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.882176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.882187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.884976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.885006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.885018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.888118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.888145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.888156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.891138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.891166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.891177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.894067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.894096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.894106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.897294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.897322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.897335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:58.713 [2024-07-13 00:31:45.900534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.900564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.900580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.903455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.903484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.903495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.906378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.906406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.906419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.909126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.909154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.909164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.912589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.912631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.713 [2024-07-13 00:31:45.912644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.713 [2024-07-13 00:31:45.915809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.713 [2024-07-13 00:31:45.915838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.714 [2024-07-13 00:31:45.915848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.714 [2024-07-13 00:31:45.918810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0) 00:22:58.714 [2024-07-13 00:31:45.918838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.714 [2024-07-13 00:31:45.918849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:22:58.714 [2024-07-13 00:31:45.922375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0)
00:22:58.714 [2024-07-13 00:31:45.922403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.714 [2024-07-13 00:31:45.922414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.714 [2024-07-13 00:31:45.925700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fd46f0)
00:22:58.714 [2024-07-13 00:31:45.925729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.714 [2024-07-13 00:31:45.925742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:58.714
00:22:58.714 Latency(us)
00:22:58.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:58.714 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:22:58.714 nvme0n1 : 2.00 9447.74 1180.97 0.00 0.00 1690.69 484.07 5332.25
00:22:58.714 ===================================================================================================================
00:22:58.714 Total : 9447.74 1180.97 0.00 0.00 1690.69 484.07 5332.25
00:22:58.714 0
00:22:58.973 00:31:45 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:58.973 00:31:45 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:58.973 00:31:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:58.973 00:31:45 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:58.973 | .driver_specific
00:22:58.973 | .nvme_error
00:22:58.973 | .status_code
00:22:58.973 | .command_transient_transport_error'
00:22:58.973 00:31:46 -- host/digest.sh@71 -- # (( 609 > 0 ))
00:22:58.973 00:31:46 -- host/digest.sh@73 -- # killprocess 97287
00:22:58.973 00:31:46 -- common/autotest_common.sh@926 -- # '[' -z 97287 ']'
00:22:58.973 00:31:46 -- common/autotest_common.sh@930 -- # kill -0 97287
00:22:59.232 00:31:46 -- common/autotest_common.sh@931 -- # uname
00:22:59.233 00:31:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:59.233 00:31:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97287
00:22:59.233 killing process with pid 97287 Received shutdown signal, test time was about 2.000000 seconds
00:22:59.233
00:22:59.233 Latency(us)
00:22:59.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:59.233 ===================================================================================================================
00:22:59.233 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:59.233 00:31:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:22:59.233 00:31:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:22:59.233 00:31:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97287'
00:22:59.233 00:31:46 -- common/autotest_common.sh@945 -- # kill 97287
00:22:59.233 00:31:46 -- common/autotest_common.sh@950 -- # wait 97287
00:22:59.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
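The trace above ends the randread data-digest run: host/digest.sh queries the bdevperf process over its RPC socket with bdev_get_iostat, pulls the per-bdev NVMe error counters out of the JSON with jq, and requires that the COMMAND TRANSIENT TRANSPORT ERROR count be non-zero (609 completions in this run) before killing bdevperf. A minimal stand-alone sketch of that query, assuming an SPDK application is already listening on /var/tmp/bperf.sock and exposes a bdev named nvme0n1 (both names are taken from this run, not fixed requirements):

  #!/usr/bin/env bash
  # Count completions that failed with TRANSIENT TRANSPORT ERROR on one bdev.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc.py path used in this run
  SOCK=/var/tmp/bperf.sock                          # bdevperf RPC socket
  BDEV=nvme0n1

  count=$("$RPC" -s "$SOCK" bdev_get_iostat -b "$BDEV" \
          | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  if (( count > 0 )); then
      echo "observed $count transient transport errors on $BDEV"
  else
      echo "no transient transport errors recorded" >&2
      exit 1
  fi

The per-status counters are available here because the test enables --nvme-error-stat when it configures the NVMe bdev module, as visible in the setup of the next run below.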
00:22:59.491 00:31:46 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:22:59.491 00:31:46 -- host/digest.sh@54 -- # local rw bs qd
00:22:59.491 00:31:46 -- host/digest.sh@56 -- # rw=randwrite
00:22:59.491 00:31:46 -- host/digest.sh@56 -- # bs=4096
00:22:59.491 00:31:46 -- host/digest.sh@56 -- # qd=128
00:22:59.491 00:31:46 -- host/digest.sh@58 -- # bperfpid=97373
00:22:59.491 00:31:46 -- host/digest.sh@60 -- # waitforlisten 97373 /var/tmp/bperf.sock
00:22:59.491 00:31:46 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:22:59.491 00:31:46 -- common/autotest_common.sh@819 -- # '[' -z 97373 ']'
00:22:59.491 00:31:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:59.491 00:31:46 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:59.491 00:31:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:59.491 00:31:46 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:59.491 00:31:46 -- common/autotest_common.sh@10 -- # set +x
00:22:59.491 [2024-07-13 00:31:46.557106] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:22:59.491 [2024-07-13 00:31:46.557217] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97373 ]
00:22:59.491 [2024-07-13 00:31:46.692966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:59.751 [2024-07-13 00:31:46.806069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:00.319 00:31:47 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:23:00.319 00:31:47 -- common/autotest_common.sh@852 -- # return 0
00:23:00.319 00:31:47 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:00.319 00:31:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:00.577 00:31:47 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:00.577 00:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:23:00.577 00:31:47 -- common/autotest_common.sh@10 -- # set +x
00:23:00.577 00:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:23:00.577 00:31:47 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:00.577 00:31:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:00.836 nvme0n1
00:23:00.836 00:31:48 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:23:00.836 00:31:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:23:00.836 00:31:48 -- common/autotest_common.sh@10 -- # set +x
00:23:00.836 00:31:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:23:00.836 00:31:48 -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:00.836 00:31:48 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:01.095 Running I/O for 2 seconds...
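The trace above is the setup for the write-path variant of the digest test: bdevperf is launched in wait-for-RPC mode (-z) on core mask 0x2 with its own RPC socket, the NVMe bdev module is configured for unlimited retries and per-status error counting, CRC-32C error injection in the accel layer is first cleared and later armed to corrupt every 256th operation, and the NVMe-oF namespace is attached over TCP with --ddgst so data digests are generated and verified. The accel_error_inject_error calls go through rpc_cmd rather than bperf_rpc, which in this suite points at the NVMe-oF target application rather than at bdevperf. A condensed sketch of the same sequence, assuming the repository path, address and subsystem name used in this run and that the target listens on its default RPC socket:

  #!/usr/bin/env bash
  # Sketch of the randwrite data-digest error setup traced above (not a drop-in replacement for digest.sh).
  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # 1. Start bdevperf idle (-z) on core 1 (mask 0x2); it exposes its own RPC socket.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
  until [ -S "$BPERF_SOCK" ]; do sleep 0.2; done     # crude stand-in for waitforlisten

  # 2. Initiator side: retry failed I/O forever and keep per-status NVMe error counters.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # 3. Target side: make sure no stale CRC-32C injection is active.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

  # 4. Attach the remote namespace with data digest enabled (it shows up as nvme0n1).
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # 5. Target side: corrupt every 256th CRC-32C operation, then run the 2-second job.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

With --bdev-retry-count -1 the injected digest failures surface as the transient transport errors logged below while the I/O itself is retried, so the job still completes and the error counters read at the end tell the test how many corruptions were observed.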
00:23:01.095 [2024-07-13 00:31:48.155830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190eea00 00:23:01.095 [2024-07-13 00:31:48.156976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.095 [2024-07-13 00:31:48.157032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:01.095 [2024-07-13 00:31:48.167027] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f4f40 00:23:01.095 [2024-07-13 00:31:48.167787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.095 [2024-07-13 00:31:48.167816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:01.095 [2024-07-13 00:31:48.176505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ecc78 00:23:01.095 [2024-07-13 00:31:48.177556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.095 [2024-07-13 00:31:48.177584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:01.096 [2024-07-13 00:31:48.186172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f1430 00:23:01.096 [2024-07-13 00:31:48.187622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.096 [2024-07-13 00:31:48.187648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:01.096 [2024-07-13 00:31:48.195441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190eb760 00:23:01.096 [2024-07-13 00:31:48.196978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.096 [2024-07-13 00:31:48.197004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:01.096 [2024-07-13 00:31:48.204814] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f0350 00:23:01.096 [2024-07-13 00:31:48.206372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.096 [2024-07-13 00:31:48.206415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:01.096 [2024-07-13 00:31:48.214954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190efae0 00:23:01.096 [2024-07-13 00:31:48.216448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.096 [2024-07-13 00:31:48.216476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 
sqhd:0070 p:0 m:0 dnr:0 00:23:01.096 [2024-07-13 00:31:48.224739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e3d08 00:23:01.096 [2024-07-13 00:31:48.226210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.096 [2024-07-13 00:31:48.226236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:01.096 [2024-07-13 00:31:48.234388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f0bc0 00:23:01.096 [2024-07-13 00:31:48.235818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.096 [2024-07-13 00:31:48.235845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:01.096 [2024-07-13 00:31:48.244247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ebfd0 00:23:01.096 [2024-07-13 00:31:48.245879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.096 [2024-07-13 00:31:48.245906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:01.096 [2024-07-13 00:31:48.254286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f1ca0 00:23:01.096 [2024-07-13 00:31:48.255480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.096 [2024-07-13 00:31:48.255507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:01.096 [2024-07-13 00:31:48.264834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f20d8 00:23:01.096 [2024-07-13 00:31:48.265990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.096 [2024-07-13 00:31:48.266025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:01.096 [2024-07-13 00:31:48.272745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190df118 00:23:01.096 [2024-07-13 00:31:48.273521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.096 [2024-07-13 00:31:48.273547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:01.096 [2024-07-13 00:31:48.282475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190eee38 00:23:01.096 [2024-07-13 00:31:48.283296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.096 [2024-07-13 00:31:48.283323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:01.096 [2024-07-13 00:31:48.292708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e73e0 00:23:01.096 [2024-07-13 00:31:48.293235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.096 [2024-07-13 00:31:48.293261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:01.096 [2024-07-13 00:31:48.302076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e2c28 00:23:01.096 [2024-07-13 00:31:48.302577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.096 [2024-07-13 00:31:48.302603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:01.096 [2024-07-13 00:31:48.311386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e8088 00:23:01.096 [2024-07-13 00:31:48.311897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.096 [2024-07-13 00:31:48.311922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:01.096 [2024-07-13 00:31:48.321153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f4b08 00:23:01.096 [2024-07-13 00:31:48.321712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.096 [2024-07-13 00:31:48.321750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:01.355 [2024-07-13 00:31:48.329940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e84c0 00:23:01.355 [2024-07-13 00:31:48.330904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.355 [2024-07-13 00:31:48.330933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.341038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190eff18 00:23:01.356 [2024-07-13 00:31:48.342199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.342231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.350655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f31b8 00:23:01.356 [2024-07-13 00:31:48.351124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.351150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.360166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f5be8 00:23:01.356 [2024-07-13 00:31:48.360682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.360727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.370184] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ebfd0 00:23:01.356 [2024-07-13 00:31:48.370860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.370887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.379951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f1868 00:23:01.356 [2024-07-13 00:31:48.380421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.380445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.390839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f2948 00:23:01.356 [2024-07-13 00:31:48.391840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.391865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.397791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f2510 00:23:01.356 [2024-07-13 00:31:48.397975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.397993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.409051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fd640 00:23:01.356 [2024-07-13 00:31:48.410513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.410539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.418214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f92c0 00:23:01.356 [2024-07-13 00:31:48.418838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.418865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.426440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e5ec8 00:23:01.356 [2024-07-13 00:31:48.426690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.426709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.437425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f7538 00:23:01.356 [2024-07-13 00:31:48.438117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.438142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.446444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f0ff8 00:23:01.356 [2024-07-13 00:31:48.447496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.447522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.456148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190df550 00:23:01.356 [2024-07-13 00:31:48.456554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.456576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.465881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f31b8 00:23:01.356 [2024-07-13 00:31:48.466405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.466432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.475369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e49b0 00:23:01.356 [2024-07-13 00:31:48.475678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.475701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.484958] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e23b8 00:23:01.356 [2024-07-13 00:31:48.485286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 
00:31:48.485310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.494411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e73e0 00:23:01.356 [2024-07-13 00:31:48.494683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.494707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.504037] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e0ea0 00:23:01.356 [2024-07-13 00:31:48.504261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.504285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.513547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f2d80 00:23:01.356 [2024-07-13 00:31:48.513764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.513788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.522994] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e9168 00:23:01.356 [2024-07-13 00:31:48.523222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.523246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.534354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f3e60 00:23:01.356 [2024-07-13 00:31:48.535187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.535217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.544184] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f7970 00:23:01.356 [2024-07-13 00:31:48.544917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.544946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.553654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190df550 00:23:01.356 [2024-07-13 00:31:48.555142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:01.356 [2024-07-13 00:31:48.555169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.563814] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190df550 00:23:01.356 [2024-07-13 00:31:48.564975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.565003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.571585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e49b0 00:23:01.356 [2024-07-13 00:31:48.572382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.572409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:01.356 [2024-07-13 00:31:48.583159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f92c0 00:23:01.356 [2024-07-13 00:31:48.584186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.356 [2024-07-13 00:31:48.584214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:01.616 [2024-07-13 00:31:48.590869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fe720 00:23:01.616 [2024-07-13 00:31:48.590941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.616 [2024-07-13 00:31:48.590960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:01.616 [2024-07-13 00:31:48.601277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ecc78 00:23:01.616 [2024-07-13 00:31:48.601488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.616 [2024-07-13 00:31:48.601513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:01.616 [2024-07-13 00:31:48.612598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190eea00 00:23:01.616 [2024-07-13 00:31:48.613514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.616 [2024-07-13 00:31:48.613540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:01.616 [2024-07-13 00:31:48.621093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fc998 00:23:01.616 [2024-07-13 00:31:48.621825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7973 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:01.616 [2024-07-13 00:31:48.621852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:01.616 [2024-07-13 00:31:48.630775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e0630 00:23:01.616 [2024-07-13 00:31:48.631166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.616 [2024-07-13 00:31:48.631189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:01.616 [2024-07-13 00:31:48.641032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e8d30 00:23:01.616 [2024-07-13 00:31:48.642115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.616 [2024-07-13 00:31:48.642141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:01.616 [2024-07-13 00:31:48.650766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e0a68 00:23:01.616 [2024-07-13 00:31:48.651848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.616 [2024-07-13 00:31:48.651874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:01.616 [2024-07-13 00:31:48.660555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e23b8 00:23:01.616 [2024-07-13 00:31:48.661158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.616 [2024-07-13 00:31:48.661184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.616 [2024-07-13 00:31:48.669239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f6458 00:23:01.616 [2024-07-13 00:31:48.669695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.616 [2024-07-13 00:31:48.669731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:01.616 [2024-07-13 00:31:48.678563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f9b30 00:23:01.616 [2024-07-13 00:31:48.679583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.616 [2024-07-13 00:31:48.679610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:01.616 [2024-07-13 00:31:48.688893] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ee190 00:23:01.616 [2024-07-13 00:31:48.689518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:7097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.616 [2024-07-13 00:31:48.689542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:01.616 [2024-07-13 00:31:48.698516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f8e88 00:23:01.616 [2024-07-13 00:31:48.699283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.616 [2024-07-13 00:31:48.699310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:01.616 [2024-07-13 00:31:48.708710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e5220 00:23:01.616 [2024-07-13 00:31:48.709963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.616 [2024-07-13 00:31:48.709990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:01.616 [2024-07-13 00:31:48.718897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e88f8 00:23:01.616 [2024-07-13 00:31:48.720036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.617 [2024-07-13 00:31:48.720062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.617 [2024-07-13 00:31:48.726547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fac10 00:23:01.617 [2024-07-13 00:31:48.727039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.617 [2024-07-13 00:31:48.727066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:01.617 [2024-07-13 00:31:48.735946] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190df118 00:23:01.617 [2024-07-13 00:31:48.736421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.617 [2024-07-13 00:31:48.736446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:01.617 [2024-07-13 00:31:48.745807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f7538 00:23:01.617 [2024-07-13 00:31:48.746291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.617 [2024-07-13 00:31:48.746326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:01.617 [2024-07-13 00:31:48.755471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e5ec8 00:23:01.617 [2024-07-13 00:31:48.755976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:85 nsid:1 lba:14462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.617 [2024-07-13 00:31:48.756004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:01.617 [2024-07-13 00:31:48.765026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e3060 00:23:01.617 [2024-07-13 00:31:48.765473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.617 [2024-07-13 00:31:48.765501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:01.617 [2024-07-13 00:31:48.774489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fda78 00:23:01.617 [2024-07-13 00:31:48.774999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.617 [2024-07-13 00:31:48.775043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:01.617 [2024-07-13 00:31:48.783926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ec840 00:23:01.617 [2024-07-13 00:31:48.784428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.617 [2024-07-13 00:31:48.784453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:01.617 [2024-07-13 00:31:48.793927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190eee38 00:23:01.617 [2024-07-13 00:31:48.794268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.617 [2024-07-13 00:31:48.794291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:01.617 [2024-07-13 00:31:48.803507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e9e10 00:23:01.617 [2024-07-13 00:31:48.804041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.617 [2024-07-13 00:31:48.804078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:01.617 [2024-07-13 00:31:48.812358] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e3d08 00:23:01.617 [2024-07-13 00:31:48.813244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.617 [2024-07-13 00:31:48.813271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:01.617 [2024-07-13 00:31:48.822530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e5a90 00:23:01.617 [2024-07-13 00:31:48.822932] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.617 [2024-07-13 00:31:48.822961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:01.617 [2024-07-13 00:31:48.831454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e49b0 00:23:01.617 [2024-07-13 00:31:48.832224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.617 [2024-07-13 00:31:48.832250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.617 [2024-07-13 00:31:48.840037] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fef90 00:23:01.617 [2024-07-13 00:31:48.840125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.617 [2024-07-13 00:31:48.840144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:01.876 [2024-07-13 00:31:48.852597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ea680 00:23:01.876 [2024-07-13 00:31:48.853339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.876 [2024-07-13 00:31:48.853370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:01.876 [2024-07-13 00:31:48.862434] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f3a28 00:23:01.876 [2024-07-13 00:31:48.863126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.876 [2024-07-13 00:31:48.863168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:01.876 [2024-07-13 00:31:48.871784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190de470 00:23:01.876 [2024-07-13 00:31:48.872796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.876 [2024-07-13 00:31:48.872824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:01.876 [2024-07-13 00:31:48.880554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e5658 00:23:01.876 [2024-07-13 00:31:48.881607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.876 [2024-07-13 00:31:48.881641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:01.876 [2024-07-13 00:31:48.889921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190de8a8 00:23:01.876 [2024-07-13 00:31:48.890866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.876 [2024-07-13 00:31:48.890891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:01.876 [2024-07-13 00:31:48.899187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f6890 00:23:01.876 [2024-07-13 00:31:48.899941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.876 [2024-07-13 00:31:48.899967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:01.876 [2024-07-13 00:31:48.909405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190dfdc0 00:23:01.876 [2024-07-13 00:31:48.910111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.876 [2024-07-13 00:31:48.910138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:01.876 [2024-07-13 00:31:48.919549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f31b8 00:23:01.876 [2024-07-13 00:31:48.920293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.876 [2024-07-13 00:31:48.920319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:01.876 [2024-07-13 00:31:48.927982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e88f8 00:23:01.876 [2024-07-13 00:31:48.928893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.876 [2024-07-13 00:31:48.928919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:01.876 [2024-07-13 00:31:48.937288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e5a90 00:23:01.876 [2024-07-13 00:31:48.938729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.876 [2024-07-13 00:31:48.938755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:01.876 [2024-07-13 00:31:48.946711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fb048 00:23:01.876 [2024-07-13 00:31:48.947448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.876 [2024-07-13 00:31:48.947474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:01.876 [2024-07-13 00:31:48.955442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e6b70 00:23:01.876 [2024-07-13 
00:31:48.956219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.876 [2024-07-13 00:31:48.956245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:01.876 [2024-07-13 00:31:48.965143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f4b08 00:23:01.876 [2024-07-13 00:31:48.965559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.876 [2024-07-13 00:31:48.965585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:01.876 [2024-07-13 00:31:48.975976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e88f8 00:23:01.876 [2024-07-13 00:31:48.977459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.876 [2024-07-13 00:31:48.977485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.876 [2024-07-13 00:31:48.985482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f0788 00:23:01.876 [2024-07-13 00:31:48.986400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.876 [2024-07-13 00:31:48.986426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:01.876 [2024-07-13 00:31:48.995581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190df988 00:23:01.876 [2024-07-13 00:31:48.996281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.876 [2024-07-13 00:31:48.996305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:01.877 [2024-07-13 00:31:49.004922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e0a68 00:23:01.877 [2024-07-13 00:31:49.005613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.877 [2024-07-13 00:31:49.005647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:01.877 [2024-07-13 00:31:49.014207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e1f80 00:23:01.877 [2024-07-13 00:31:49.014950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.877 [2024-07-13 00:31:49.014993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:01.877 [2024-07-13 00:31:49.024226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e4578 
00:23:01.877 [2024-07-13 00:31:49.024980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.877 [2024-07-13 00:31:49.025022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:01.877 [2024-07-13 00:31:49.032715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f6cc8 00:23:01.877 [2024-07-13 00:31:49.033588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.877 [2024-07-13 00:31:49.033625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:01.877 [2024-07-13 00:31:49.042190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f6cc8 00:23:01.877 [2024-07-13 00:31:49.043105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.877 [2024-07-13 00:31:49.043130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:01.877 [2024-07-13 00:31:49.051822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f6cc8 00:23:01.877 [2024-07-13 00:31:49.053009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.877 [2024-07-13 00:31:49.053035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:01.877 [2024-07-13 00:31:49.061547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f6cc8 00:23:01.877 [2024-07-13 00:31:49.062603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.877 [2024-07-13 00:31:49.062638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:01.877 [2024-07-13 00:31:49.070594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f4298 00:23:01.877 [2024-07-13 00:31:49.071636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.877 [2024-07-13 00:31:49.071662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.877 [2024-07-13 00:31:49.080786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f8e88 00:23:01.877 [2024-07-13 00:31:49.081505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.877 [2024-07-13 00:31:49.081531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:01.877 [2024-07-13 00:31:49.091118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with 
pdu=0x2000190e9168 00:23:01.877 [2024-07-13 00:31:49.091877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.877 [2024-07-13 00:31:49.091902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:01.877 [2024-07-13 00:31:49.099888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fcdd0 00:23:01.877 [2024-07-13 00:31:49.101033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.877 [2024-07-13 00:31:49.101058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.110348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190eaef0 00:23:02.137 [2024-07-13 00:31:49.110870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.110897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.120113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f0788 00:23:02.137 [2024-07-13 00:31:49.120644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.120696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.129313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f0350 00:23:02.137 [2024-07-13 00:31:49.130151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.130177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.138292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f1430 00:23:02.137 [2024-07-13 00:31:49.138978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.139005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.148899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e0ea0 00:23:02.137 [2024-07-13 00:31:49.149904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.149931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.159306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14d2620) with pdu=0x2000190e84c0 00:23:02.137 [2024-07-13 00:31:49.160592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.160634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.167892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e4578 00:23:02.137 [2024-07-13 00:31:49.168678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.168707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.177329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f9b30 00:23:02.137 [2024-07-13 00:31:49.177795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.177819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.186834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e23b8 00:23:02.137 [2024-07-13 00:31:49.187655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.187687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.196394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f20d8 00:23:02.137 [2024-07-13 00:31:49.196923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.196964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.205998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f31b8 00:23:02.137 [2024-07-13 00:31:49.206793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.206825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.215322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f4f40 00:23:02.137 [2024-07-13 00:31:49.215799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.215822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.224676] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e6b70 00:23:02.137 [2024-07-13 00:31:49.225244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.225271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.234746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f2948 00:23:02.137 [2024-07-13 00:31:49.235104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.235143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.242934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ebb98 00:23:02.137 [2024-07-13 00:31:49.243060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.243078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.254269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e1710 00:23:02.137 [2024-07-13 00:31:49.255482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.255510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.263987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190de038 00:23:02.137 [2024-07-13 00:31:49.264771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.264799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.272240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ebb98 00:23:02.137 [2024-07-13 00:31:49.272538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.272561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.284047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e3498 00:23:02.137 [2024-07-13 00:31:49.284940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.284966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.293013] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f9f68 00:23:02.137 [2024-07-13 00:31:49.294219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.294245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.302300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f1ca0 00:23:02.137 [2024-07-13 00:31:49.303680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.303705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.313493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e6738 00:23:02.137 [2024-07-13 00:31:49.314504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.314528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.320430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f31b8 00:23:02.137 [2024-07-13 00:31:49.320706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.320732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.332069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f7100 00:23:02.137 [2024-07-13 00:31:49.332892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.332918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.340509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e8088 00:23:02.137 [2024-07-13 00:31:49.341497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.341523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:02.137 [2024-07-13 00:31:49.350042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190dece0 00:23:02.137 [2024-07-13 00:31:49.351443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.137 [2024-07-13 00:31:49.351471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:02.138 
[2024-07-13 00:31:49.359520] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f9b30 00:23:02.138 [2024-07-13 00:31:49.360326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.138 [2024-07-13 00:31:49.360352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:02.396 [2024-07-13 00:31:49.369842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e95a0 00:23:02.397 [2024-07-13 00:31:49.371600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.371640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.378954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e8d30 00:23:02.397 [2024-07-13 00:31:49.379797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.379825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.389035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e4140 00:23:02.397 [2024-07-13 00:31:49.389675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.389702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.399564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e4140 00:23:02.397 [2024-07-13 00:31:49.400640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.400720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.409072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ed920 00:23:02.397 [2024-07-13 00:31:49.410120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.410145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.418741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e95a0 00:23:02.397 [2024-07-13 00:31:49.419600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.419637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.429247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fe2e8 00:23:02.397 [2024-07-13 00:31:49.430558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.430583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.438636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fef90 00:23:02.397 [2024-07-13 00:31:49.439956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.439982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.448082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f20d8 00:23:02.397 [2024-07-13 00:31:49.449391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.449416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.457505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f0ff8 00:23:02.397 [2024-07-13 00:31:49.458830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.458855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.466896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fbcf0 00:23:02.397 [2024-07-13 00:31:49.468133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.468158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.476535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e4578 00:23:02.397 [2024-07-13 00:31:49.477747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.477773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.487542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f7970 00:23:02.397 [2024-07-13 00:31:49.489013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.489039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 
cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.496093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190eb328 00:23:02.397 [2024-07-13 00:31:49.497136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.497163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.507013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fa7d8 00:23:02.397 [2024-07-13 00:31:49.508310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.508336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.515138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f35f0 00:23:02.397 [2024-07-13 00:31:49.516019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.516045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.524406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f92c0 00:23:02.397 [2024-07-13 00:31:49.525585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.525628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.534078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e5220 00:23:02.397 [2024-07-13 00:31:49.534958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.534984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.543845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190eaab8 00:23:02.397 [2024-07-13 00:31:49.545124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.545155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.553868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fc560 00:23:02.397 [2024-07-13 00:31:49.554963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.554990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.563412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e3d08 00:23:02.397 [2024-07-13 00:31:49.564359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.564385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.573313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ec408 00:23:02.397 [2024-07-13 00:31:49.574308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.574334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.583061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f6020 00:23:02.397 [2024-07-13 00:31:49.583764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.583790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.592557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ed0b0 00:23:02.397 [2024-07-13 00:31:49.593037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.593064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.601959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f7970 00:23:02.397 [2024-07-13 00:31:49.602365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.602397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.611328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ebfd0 00:23:02.397 [2024-07-13 00:31:49.611743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.611767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:02.397 [2024-07-13 00:31:49.620786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fa7d8 00:23:02.397 [2024-07-13 00:31:49.621163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.397 [2024-07-13 00:31:49.621189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.631037] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e6300 00:23:02.657 [2024-07-13 00:31:49.631365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.631391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.640728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f0bc0 00:23:02.657 [2024-07-13 00:31:49.641059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.641084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.650312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ed920 00:23:02.657 [2024-07-13 00:31:49.650595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.650627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.661075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ec408 00:23:02.657 [2024-07-13 00:31:49.661922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.661949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.669586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fc128 00:23:02.657 [2024-07-13 00:31:49.670455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.670482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.679532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f7da8 00:23:02.657 [2024-07-13 00:31:49.681173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.681203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.689797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fef90 00:23:02.657 [2024-07-13 00:31:49.690556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.690582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.699324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e5658 00:23:02.657 [2024-07-13 00:31:49.700141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.700167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.708997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190eff18 00:23:02.657 [2024-07-13 00:31:49.710317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.710343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.719176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e95a0 00:23:02.657 [2024-07-13 00:31:49.720055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.720081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.728128] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fb048 00:23:02.657 [2024-07-13 00:31:49.728305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.728323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.737826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e23b8 00:23:02.657 [2024-07-13 00:31:49.738201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.738229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.750393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190dfdc0 00:23:02.657 [2024-07-13 00:31:49.751563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.751587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.757418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ec408 00:23:02.657 [2024-07-13 00:31:49.757762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 
00:31:49.757785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.767699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fa3a0 00:23:02.657 [2024-07-13 00:31:49.768191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.768217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.776856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ecc78 00:23:02.657 [2024-07-13 00:31:49.777328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.777363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.786002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e9e10 00:23:02.657 [2024-07-13 00:31:49.787162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.787189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.795065] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f7538 00:23:02.657 [2024-07-13 00:31:49.795154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.795173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.805937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ed920 00:23:02.657 [2024-07-13 00:31:49.806438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.806464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.814931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fcdd0 00:23:02.657 [2024-07-13 00:31:49.815504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.815530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.823288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e0ea0 00:23:02.657 [2024-07-13 00:31:49.823422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:02.657 [2024-07-13 00:31:49.823440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.834276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190edd58 00:23:02.657 [2024-07-13 00:31:49.834833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.834858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.846166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e6738 00:23:02.657 [2024-07-13 00:31:49.847291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.657 [2024-07-13 00:31:49.847316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.657 [2024-07-13 00:31:49.852853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190dece0 00:23:02.657 [2024-07-13 00:31:49.853714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.658 [2024-07-13 00:31:49.853740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:02.658 [2024-07-13 00:31:49.862316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ea680 00:23:02.658 [2024-07-13 00:31:49.862494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.658 [2024-07-13 00:31:49.862513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:02.658 [2024-07-13 00:31:49.871898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e1f80 00:23:02.658 [2024-07-13 00:31:49.872251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.658 [2024-07-13 00:31:49.872276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:02.658 [2024-07-13 00:31:49.883856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f1868 00:23:02.658 [2024-07-13 00:31:49.885091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.658 [2024-07-13 00:31:49.885117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:49.892904] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e7818 00:23:02.917 [2024-07-13 00:31:49.893847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24139 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:49.893874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:49.901781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e7818 00:23:02.917 [2024-07-13 00:31:49.902811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:49.902837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:49.911200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e99d8 00:23:02.917 [2024-07-13 00:31:49.912044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:49.912070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:49.920587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190eb760 00:23:02.917 [2024-07-13 00:31:49.921685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:49.921709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:49.930069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f7da8 00:23:02.917 [2024-07-13 00:31:49.930876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:49.930907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:49.939417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e2c28 00:23:02.917 [2024-07-13 00:31:49.940431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:49.940457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:49.949187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e5ec8 00:23:02.917 [2024-07-13 00:31:49.950141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:49.950166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:49.958538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ff3c8 00:23:02.917 [2024-07-13 00:31:49.959468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 
lba:10619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:49.959494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:49.967976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f96f8 00:23:02.917 [2024-07-13 00:31:49.968937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:49.968963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:49.977596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190edd58 00:23:02.917 [2024-07-13 00:31:49.978337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:49.978365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:49.987228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e4578 00:23:02.917 [2024-07-13 00:31:49.988144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:49.988171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:49.996640] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190fd208 00:23:02.917 [2024-07-13 00:31:49.997380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:49.997406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:50.007437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190eb760 00:23:02.917 [2024-07-13 00:31:50.008474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:50.008502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:50.018006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f7da8 00:23:02.917 [2024-07-13 00:31:50.018926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:50.018952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:50.028976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e3498 00:23:02.917 [2024-07-13 00:31:50.029713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:67 nsid:1 lba:9159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:50.029741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:50.039129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f5378 00:23:02.917 [2024-07-13 00:31:50.039973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:50.039999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:50.049100] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e7818 00:23:02.917 [2024-07-13 00:31:50.049753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:50.049780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:50.060002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e7818 00:23:02.917 [2024-07-13 00:31:50.061235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.917 [2024-07-13 00:31:50.061263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:02.917 [2024-07-13 00:31:50.070006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f0ff8 00:23:02.917 [2024-07-13 00:31:50.070923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.918 [2024-07-13 00:31:50.070950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:02.918 [2024-07-13 00:31:50.080901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190de038 00:23:02.918 [2024-07-13 00:31:50.081791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.918 [2024-07-13 00:31:50.081817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:02.918 [2024-07-13 00:31:50.090733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f35f0 00:23:02.918 [2024-07-13 00:31:50.092037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.918 [2024-07-13 00:31:50.092063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:02.918 [2024-07-13 00:31:50.100257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190eb328 00:23:02.918 [2024-07-13 00:31:50.101674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.918 [2024-07-13 00:31:50.101731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.918 [2024-07-13 00:31:50.109916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190ec408 00:23:02.918 [2024-07-13 00:31:50.111177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.918 [2024-07-13 00:31:50.111203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:02.918 [2024-07-13 00:31:50.119444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190f6020 00:23:02.918 [2024-07-13 00:31:50.120827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.918 [2024-07-13 00:31:50.120855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:02.918 [2024-07-13 00:31:50.129111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190eea00 00:23:02.918 [2024-07-13 00:31:50.130184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.918 [2024-07-13 00:31:50.130211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:02.918 [2024-07-13 00:31:50.140414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2620) with pdu=0x2000190e9e10 00:23:02.918 [2024-07-13 00:31:50.141632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.918 [2024-07-13 00:31:50.141686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:02.918 00:23:02.918 Latency(us) 00:23:02.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.918 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:02.918 nvme0n1 : 2.00 26291.89 102.70 0.00 0.00 4863.60 1906.50 12868.89 00:23:02.918 =================================================================================================================== 00:23:02.918 Total : 26291.89 102.70 0.00 0.00 4863.60 1906.50 12868.89 00:23:02.918 0 00:23:03.177 00:31:50 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:03.177 00:31:50 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:03.177 00:31:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:03.177 00:31:50 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:03.177 | .driver_specific 00:23:03.177 | .nvme_error 00:23:03.177 | .status_code 00:23:03.177 | .command_transient_transport_error' 00:23:03.435 00:31:50 -- host/digest.sh@71 -- # (( 206 > 0 )) 00:23:03.435 00:31:50 -- host/digest.sh@73 -- # killprocess 97373 00:23:03.435 00:31:50 -- common/autotest_common.sh@926 
-- # '[' -z 97373 ']' 00:23:03.435 00:31:50 -- common/autotest_common.sh@930 -- # kill -0 97373 00:23:03.435 00:31:50 -- common/autotest_common.sh@931 -- # uname 00:23:03.435 00:31:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:03.435 00:31:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97373 00:23:03.435 killing process with pid 97373 00:23:03.435 Received shutdown signal, test time was about 2.000000 seconds 00:23:03.435 00:23:03.435 Latency(us) 00:23:03.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.435 =================================================================================================================== 00:23:03.435 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:03.435 00:31:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:03.435 00:31:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:03.435 00:31:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97373' 00:23:03.435 00:31:50 -- common/autotest_common.sh@945 -- # kill 97373 00:23:03.435 00:31:50 -- common/autotest_common.sh@950 -- # wait 97373 00:23:03.693 00:31:50 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:23:03.693 00:31:50 -- host/digest.sh@54 -- # local rw bs qd 00:23:03.693 00:31:50 -- host/digest.sh@56 -- # rw=randwrite 00:23:03.693 00:31:50 -- host/digest.sh@56 -- # bs=131072 00:23:03.693 00:31:50 -- host/digest.sh@56 -- # qd=16 00:23:03.693 00:31:50 -- host/digest.sh@58 -- # bperfpid=97463 00:23:03.693 00:31:50 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:03.693 00:31:50 -- host/digest.sh@60 -- # waitforlisten 97463 /var/tmp/bperf.sock 00:23:03.694 00:31:50 -- common/autotest_common.sh@819 -- # '[' -z 97463 ']' 00:23:03.694 00:31:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:03.694 00:31:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:03.694 00:31:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:03.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:03.694 00:31:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:03.694 00:31:50 -- common/autotest_common.sh@10 -- # set +x 00:23:03.694 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:03.694 Zero copy mechanism will not be used. 00:23:03.694 [2024-07-13 00:31:50.741836] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:23:03.694 [2024-07-13 00:31:50.741920] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97463 ] 00:23:03.694 [2024-07-13 00:31:50.873795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.952 [2024-07-13 00:31:50.981899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.519 00:31:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:04.519 00:31:51 -- common/autotest_common.sh@852 -- # return 0 00:23:04.519 00:31:51 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:04.519 00:31:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:04.777 00:31:51 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:04.777 00:31:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.777 00:31:51 -- common/autotest_common.sh@10 -- # set +x 00:23:04.777 00:31:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.777 00:31:51 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:04.777 00:31:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:05.036 nvme0n1 00:23:05.296 00:31:52 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:05.296 00:31:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.296 00:31:52 -- common/autotest_common.sh@10 -- # set +x 00:23:05.296 00:31:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.296 00:31:52 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:05.296 00:31:52 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:05.296 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:05.296 Zero copy mechanism will not be used. 00:23:05.296 Running I/O for 2 seconds... 
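For reference, the xtrace above reduces to a short RPC sequence. The sketch below is assembled only from commands that appear verbatim in this trace (the rpc.py path, the bperf socket, the target address 10.0.0.2:4420 and the NQN are copied from the log, not verified independently); in digest.sh they are issued through the bperf_rpc and rpc_cmd helpers rather than typed by hand:

  # Sketch of the error-injection setup seen in the trace; paths and addresses are copied from the log.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # bperf_rpc targets the bdevperf app's RPC socket: keep per-controller error stats and retry I/O indefinitely.
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the NVMe-oF TCP controller with data digest enabled (--ddgst).
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # rpc_cmd (its socket is not shown in this excerpt) arms the crc32c corruption
  # that produces the data digest errors logged below:
  #   accel_error_inject_error -o crc32c -t corrupt -i 32

  # perform_tests starts the queued randwrite workload in the already-running bdevperf instance.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With that in place, bdevperf drives 128 KiB random writes at queue depth 16 for the 2-second run announced above, and the target's digest check reports the transient transport errors that the test later counts via bdev_get_iostat.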
00:23:05.296 [2024-07-13 00:31:52.406776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.296 [2024-07-13 00:31:52.407021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.296 [2024-07-13 00:31:52.407052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.296 [2024-07-13 00:31:52.411054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.296 [2024-07-13 00:31:52.411179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.296 [2024-07-13 00:31:52.411199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.296 [2024-07-13 00:31:52.414938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.296 [2024-07-13 00:31:52.415079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.296 [2024-07-13 00:31:52.415099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.296 [2024-07-13 00:31:52.418762] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.296 [2024-07-13 00:31:52.418859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.296 [2024-07-13 00:31:52.418879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.296 [2024-07-13 00:31:52.422673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.296 [2024-07-13 00:31:52.422748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.296 [2024-07-13 00:31:52.422767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.296 [2024-07-13 00:31:52.426442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.296 [2024-07-13 00:31:52.426516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.296 [2024-07-13 00:31:52.426535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.296 [2024-07-13 00:31:52.430391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.296 [2024-07-13 00:31:52.430498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.296 [2024-07-13 00:31:52.430517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.296 [2024-07-13 00:31:52.434391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.296 [2024-07-13 00:31:52.434590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.296 [2024-07-13 00:31:52.434622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.296 [2024-07-13 00:31:52.438360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.296 [2024-07-13 00:31:52.438549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.296 [2024-07-13 00:31:52.438569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.296 [2024-07-13 00:31:52.442530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.296 [2024-07-13 00:31:52.442667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.296 [2024-07-13 00:31:52.442687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.296 [2024-07-13 00:31:52.446611] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.296 [2024-07-13 00:31:52.446714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.296 [2024-07-13 00:31:52.446734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.296 [2024-07-13 00:31:52.450555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.296 [2024-07-13 00:31:52.450663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.296 [2024-07-13 00:31:52.450684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.296 [2024-07-13 00:31:52.454521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.296 [2024-07-13 00:31:52.454638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 [2024-07-13 00:31:52.454658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.297 [2024-07-13 00:31:52.458476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.297 [2024-07-13 00:31:52.458587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 [2024-07-13 00:31:52.458606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.297 [2024-07-13 00:31:52.462483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.297 [2024-07-13 00:31:52.462591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 [2024-07-13 00:31:52.462621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.297 [2024-07-13 00:31:52.466661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.297 [2024-07-13 00:31:52.466855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 [2024-07-13 00:31:52.466875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.297 [2024-07-13 00:31:52.470573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.297 [2024-07-13 00:31:52.470774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 [2024-07-13 00:31:52.470793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.297 [2024-07-13 00:31:52.474488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.297 [2024-07-13 00:31:52.474598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 [2024-07-13 00:31:52.474629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.297 [2024-07-13 00:31:52.478625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.297 [2024-07-13 00:31:52.478740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 [2024-07-13 00:31:52.478760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.297 [2024-07-13 00:31:52.482525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.297 [2024-07-13 00:31:52.482604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 [2024-07-13 00:31:52.482636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.297 [2024-07-13 00:31:52.486473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.297 [2024-07-13 00:31:52.486568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 [2024-07-13 00:31:52.486589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.297 [2024-07-13 00:31:52.490512] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.297 [2024-07-13 00:31:52.490634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 [2024-07-13 00:31:52.490672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.297 [2024-07-13 00:31:52.494546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.297 [2024-07-13 00:31:52.494665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 [2024-07-13 00:31:52.494685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.297 [2024-07-13 00:31:52.498564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.297 [2024-07-13 00:31:52.498755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 [2024-07-13 00:31:52.498774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.297 [2024-07-13 00:31:52.502433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.297 [2024-07-13 00:31:52.502572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 [2024-07-13 00:31:52.502592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.297 [2024-07-13 00:31:52.506428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.297 [2024-07-13 00:31:52.506535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 [2024-07-13 00:31:52.506554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.297 [2024-07-13 00:31:52.510413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.297 [2024-07-13 00:31:52.510489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 [2024-07-13 00:31:52.510509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.297 [2024-07-13 00:31:52.514394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.297 [2024-07-13 00:31:52.514478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 
[2024-07-13 00:31:52.514497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.297 [2024-07-13 00:31:52.518332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.297 [2024-07-13 00:31:52.518424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 [2024-07-13 00:31:52.518449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.297 [2024-07-13 00:31:52.522516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.297 [2024-07-13 00:31:52.522619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.297 [2024-07-13 00:31:52.522655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.526874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.526979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.527000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.531155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.531332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.531352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.535109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.535275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.535295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.539048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.539153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.539172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.543160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.543263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.543283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.547166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.547240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.547259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.551079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.551154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.551173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.555087] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.555190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.555209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.559033] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.559141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.559160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.563091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.563271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.563290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.567003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.567193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.567213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.571010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.571153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.571172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.574914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.575025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.575045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.578794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.578875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.578894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.582773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.582863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.582883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.586682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.586807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.586826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.590576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.590687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.590709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.594578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.594768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.558 [2024-07-13 00:31:52.594789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.558 [2024-07-13 00:31:52.598420] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.558 [2024-07-13 00:31:52.598650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.598670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.602338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.602487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.602506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.606265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.606342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.606362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.610112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.610205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.610224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.614037] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.614141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.614161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.617934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.618061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.618080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.621853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.621972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.621992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.625955] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 
[2024-07-13 00:31:52.626132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.626152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.629884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.630052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.630071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.633821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.633971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.633990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.637696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.637772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.637791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.641834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.641907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.641926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.645794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.645880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.645910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.649759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.649907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.649927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.654067] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.654203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.654225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.658320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.658498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.658518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.662295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.662464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.662484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.666325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.666476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.666496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.670240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.670315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.670334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.674142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.674216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.674236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.677983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.678074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.678094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.681865] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.681991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.682010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.685767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.685876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.685896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.689776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.689954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.689974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.693718] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.693899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.693919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.697659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.697786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.697806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.701443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.701534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.701554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.705319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.705411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.705431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:05.559 [2024-07-13 00:31:52.709176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.709279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.559 [2024-07-13 00:31:52.709299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.559 [2024-07-13 00:31:52.713183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.559 [2024-07-13 00:31:52.713311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.713330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.717118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.717230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.717250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.721200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.721377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.721397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.725056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.725306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.725332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.728904] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.729075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.729111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.732842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.732935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.732956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.736780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.736873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.736894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.740757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.740849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.740884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.744823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.744976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.744997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.748769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.748958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.748978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.752924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.753176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.753196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.756794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.756966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.756986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.760607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.760860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.760886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.764599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.764725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.764745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.768400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.768483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.768503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.772382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.772470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.772489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.776455] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.776584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.776604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.780384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.780559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.780579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.560 [2024-07-13 00:31:52.784570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.560 [2024-07-13 00:31:52.784849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.560 [2024-07-13 00:31:52.784872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.788760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.789031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.789072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.792777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.792950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.792972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.796912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.797023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.797047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.800750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.800835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.800855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.804575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.804728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.804749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.808516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.808676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.808714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.812432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.812539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.812558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.816539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.816770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 
00:31:52.816797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.820411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.820602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.820632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.824413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.824559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.824578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.828457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.828541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.828561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.832320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.832405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.832424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.836235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.836309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.836328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.840156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.840281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.840300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.844057] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.844181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:05.820 [2024-07-13 00:31:52.844201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.848053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.848229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.848248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.852095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.852271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.852296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.856064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.856193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.856213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.859926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.860020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.860040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.863758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.863833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.863853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.867686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.867760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.867780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.871491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.871625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.871646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.875394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.875513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.875532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.879364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.879549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.879568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.883263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.883472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.883492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.887252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.887396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.887415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.891260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.891345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.820 [2024-07-13 00:31:52.891364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.820 [2024-07-13 00:31:52.895119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.820 [2024-07-13 00:31:52.895205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.895224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.898989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.899061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.899080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.902926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.903055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.903075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.906803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.906906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.906927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.910814] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.910991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.911019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.914657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.914820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.914839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.918547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.918711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.918731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.922385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.922470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.922489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.926252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.926344] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.926364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.930192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.930284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.930303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.934107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.934232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.934252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.938109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.938263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.938282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.942055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.942230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.942250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.945933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.946095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.946114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.949881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.950033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.950053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.953742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.953819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.953837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.957598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.957698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.957718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.961566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.961680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.961716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.965616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.965765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.965785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.969660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.969767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.969787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.973747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.973921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.973941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.977608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.977833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.977852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.981727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 
00:31:52.981872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.981892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.985529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.985622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.985642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.989402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.989476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.989495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.993352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.993425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.993444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:52.997342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:52.997471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:52.997491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:53.001241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:53.001338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:53.001359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:53.005234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:53.005410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:53.005431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:53.009093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with 
pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:53.009314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:53.009333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:53.012960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:53.013129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.821 [2024-07-13 00:31:53.013149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.821 [2024-07-13 00:31:53.016852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.821 [2024-07-13 00:31:53.016932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.822 [2024-07-13 00:31:53.016952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.822 [2024-07-13 00:31:53.020755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.822 [2024-07-13 00:31:53.020848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.822 [2024-07-13 00:31:53.020868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.822 [2024-07-13 00:31:53.024554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.822 [2024-07-13 00:31:53.024680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.822 [2024-07-13 00:31:53.024700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.822 [2024-07-13 00:31:53.028485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.822 [2024-07-13 00:31:53.028622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.822 [2024-07-13 00:31:53.028642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.822 [2024-07-13 00:31:53.032436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.822 [2024-07-13 00:31:53.032534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.822 [2024-07-13 00:31:53.032554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.822 [2024-07-13 00:31:53.036443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.822 [2024-07-13 00:31:53.036632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.822 [2024-07-13 00:31:53.036658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.822 [2024-07-13 00:31:53.040416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.822 [2024-07-13 00:31:53.040578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.822 [2024-07-13 00:31:53.040597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.822 [2024-07-13 00:31:53.044357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:05.822 [2024-07-13 00:31:53.044509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.822 [2024-07-13 00:31:53.044530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.081 [2024-07-13 00:31:53.048859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.081 [2024-07-13 00:31:53.048965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.081 [2024-07-13 00:31:53.048986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.081 [2024-07-13 00:31:53.052933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.081 [2024-07-13 00:31:53.053044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.081 [2024-07-13 00:31:53.053078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.081 [2024-07-13 00:31:53.057165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.081 [2024-07-13 00:31:53.057239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.081 [2024-07-13 00:31:53.057258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.081 [2024-07-13 00:31:53.061138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.081 [2024-07-13 00:31:53.061264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.081 [2024-07-13 00:31:53.061285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.081 [2024-07-13 00:31:53.065044] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.081 [2024-07-13 00:31:53.065177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.081 [2024-07-13 00:31:53.065197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.081 [2024-07-13 00:31:53.069218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.081 [2024-07-13 00:31:53.069395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.081 [2024-07-13 00:31:53.069429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.081 [2024-07-13 00:31:53.073144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.081 [2024-07-13 00:31:53.073345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.081 [2024-07-13 00:31:53.073365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.081 [2024-07-13 00:31:53.077143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.081 [2024-07-13 00:31:53.077289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.081 [2024-07-13 00:31:53.077309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.081 [2024-07-13 00:31:53.081044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.081 [2024-07-13 00:31:53.081150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.081 [2024-07-13 00:31:53.081170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.081 [2024-07-13 00:31:53.084935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.081 [2024-07-13 00:31:53.085042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.081 [2024-07-13 00:31:53.085062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.081 [2024-07-13 00:31:53.088888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.081 [2024-07-13 00:31:53.088966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.081 [2024-07-13 00:31:53.088985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.081 
[2024-07-13 00:31:53.092725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.081 [2024-07-13 00:31:53.092858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.081 [2024-07-13 00:31:53.092878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.081 [2024-07-13 00:31:53.096542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.081 [2024-07-13 00:31:53.096707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.081 [2024-07-13 00:31:53.096728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.081 [2024-07-13 00:31:53.100567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.081 [2024-07-13 00:31:53.100781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.081 [2024-07-13 00:31:53.100801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.081 [2024-07-13 00:31:53.104307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.081 [2024-07-13 00:31:53.104510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.081 [2024-07-13 00:31:53.104537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.081 [2024-07-13 00:31:53.108307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.081 [2024-07-13 00:31:53.108453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.081 [2024-07-13 00:31:53.108473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.081 [2024-07-13 00:31:53.112178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.081 [2024-07-13 00:31:53.112263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.112282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.116090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.116181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.116200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.119935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.120032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.120052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.123929] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.124059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.124078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.127923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.128050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.128070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.132010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.132192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.132218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.135977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.136165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.136197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.140012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.140156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.140175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.144078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.144176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.144196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.148014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.148090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.148110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.151993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.152083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.152102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.155992] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.156123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.156143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.159986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.160112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.160133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.164029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.164213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.164232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.167969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.168131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.168150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.171944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.172094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.172114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.175836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.175920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.175940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.179719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.179814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.179833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.183625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.183701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.183720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.187566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.187706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.187726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.191441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.191570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.191589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.195397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.195571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.195605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.199271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.199477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.199496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.203097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.203225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.203244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.207007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.207088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.207107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.210976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.211078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.211097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.214878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.214971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.214992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.218767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.218891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.218910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.222752] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.222852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.082 [2024-07-13 00:31:53.222871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.082 [2024-07-13 00:31:53.226663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.082 [2024-07-13 00:31:53.226841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 
00:31:53.226861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.230524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.230745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.230765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.234476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.234642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.234662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.238393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.238471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.238490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.242374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.242447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.242467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.246280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.246356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.246375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.250299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.250424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.250443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.254316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.254424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:06.083 [2024-07-13 00:31:53.254443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.258309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.258494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.258514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.262162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.262369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.262395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.266204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.266348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.266368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.270081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.270184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.270204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.274016] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.274105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.274125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.277928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.278004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.278023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.281834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.281961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.281981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.285697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.285826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.285846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.289695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.289874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.289901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.293576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.293776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.293795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.297534] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.297688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.297708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.301354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.301455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.301475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.305246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.305323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.305342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.083 [2024-07-13 00:31:53.309665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.083 [2024-07-13 00:31:53.309809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.083 [2024-07-13 00:31:53.309832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.313919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.314058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.314079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.318108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.318208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.318231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.322159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.322332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.322352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.326152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.326317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.326336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.330262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.330413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.330432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.334264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.334360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.334380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.338162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.338256] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.338275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.342091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.342164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.342183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.346437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.346578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.346599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.350742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.350860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.350880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.354771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.354949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.354975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.358675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.358841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.358860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.362587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.362772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.362792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.366528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.366642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.366662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.370544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.370711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.370732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.374505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.374584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.374603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.378488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.378625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.378644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.382414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.382523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.382542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.386560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.386748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.386767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.390475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.390659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.390679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.394475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 
00:31:53.394635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.394656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.398432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.398514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.398534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.402375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.402467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.402486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.406395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.406487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.406507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.410386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.410512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.410531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.414272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.414370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.414390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.418238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.418423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.418443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.422151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with 
pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.422340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.422367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.426153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.426283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.426308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.430098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.430188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.430208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.433978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.434050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.434070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.437887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.437976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.437997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.441888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.442018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.442037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.445795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.445911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.445936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.449819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.449993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.450012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.453687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.453866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.453891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.457638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.457808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.457827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.461489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.461574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.342 [2024-07-13 00:31:53.461593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.342 [2024-07-13 00:31:53.465461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.342 [2024-07-13 00:31:53.465534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.465553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.469508] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.469583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.469602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.473523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.473667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.473699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.477595] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.477751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.477777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.481907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.482103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.482124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.485987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.486162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.486181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.490148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.490276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.490300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.494216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.494301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.494320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.498359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.498482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.498502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.502436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.502537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.502556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.343 
[2024-07-13 00:31:53.506534] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.506691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.506711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.510567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.510724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.510744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.514770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.514950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.514970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.518683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.518901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.518920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.522597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.522759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.522779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.526538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.526616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.526636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.530509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.530592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.530611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.534397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.534473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.534491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.538309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.538439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.538458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.542462] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.542626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.542646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.546523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.546711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.546730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.550460] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.550643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.550663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.554379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.554507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.554526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.558226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.558302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.558320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.562103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.562178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.562197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.566075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.566152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.566170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.343 [2024-07-13 00:31:53.570305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.343 [2024-07-13 00:31:53.570462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.343 [2024-07-13 00:31:53.570483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.574477] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.574603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.574643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.578725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.578904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.578924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.582641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.582809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.582829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.586619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.586790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.586809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.590639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.590723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.590742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.594434] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.594527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.594546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.598367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.598464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.598483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.602287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.602416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.602435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.606264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.606397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.606415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.610225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.610399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.610420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.614073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.614263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.614293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.617910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.618061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.618080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.621849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.621927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.621946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.625776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.625860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.625880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.629605] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.629753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.629774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.633579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.633738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.633758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.637476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.637597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.637616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.641527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.641717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 
00:31:53.641738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.645319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.645485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.645504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.649288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.649431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.649451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.653233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.653315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.653335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.657133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.657218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.657237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.661000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.661113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.661133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.664865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.664995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.665014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.668708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.668804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:06.603 [2024-07-13 00:31:53.668823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.672635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.672844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.672864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.676459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.676690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.676710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.680402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.680531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.680550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.684311] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.684400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.684419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.688265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.688356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.688375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.692135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.692211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.692230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.696014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.696141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.696160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.700012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.700112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.700132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.703972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.704155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.704174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.707878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.708061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.708086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.711749] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.711899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.711918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.715658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.715735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.715754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.719524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.719636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.719655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.723452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.723544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.723563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.727404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.727530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.727550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.731395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.731502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.731521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.735435] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.735609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.735643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.739478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.739655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.739674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.743551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.743712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.743733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.747555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.747672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.603 [2024-07-13 00:31:53.747692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.603 [2024-07-13 00:31:53.751566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.603 [2024-07-13 00:31:53.751655] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.751675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.755515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.755591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.755623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.759528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.759677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.759696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.763443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.763533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.763552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.767386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.767569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.767589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.771346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.771563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.771582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.775289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.775435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.775455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.779328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.779424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.779443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.783456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.783550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.783571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.787567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.787673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.787694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.791902] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.792026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.792046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.795803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.795934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.795953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.799907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.800082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.800102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.803836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.803995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.804014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.807756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 
00:31:53.807886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.807905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.811711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.811794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.811813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.815566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.815671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.815690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.819444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.819517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.819535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.823385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.823510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.823530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.604 [2024-07-13 00:31:53.827231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.604 [2024-07-13 00:31:53.827414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.604 [2024-07-13 00:31:53.827437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.863 [2024-07-13 00:31:53.831727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.863 [2024-07-13 00:31:53.831931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.863 [2024-07-13 00:31:53.831959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.863 [2024-07-13 00:31:53.835678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with 
pdu=0x2000190fef90 00:23:06.863 [2024-07-13 00:31:53.835992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.863 [2024-07-13 00:31:53.836019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.863 [2024-07-13 00:31:53.839883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.863 [2024-07-13 00:31:53.840033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.863 [2024-07-13 00:31:53.840053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.863 [2024-07-13 00:31:53.843928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.863 [2024-07-13 00:31:53.844009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.863 [2024-07-13 00:31:53.844028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.863 [2024-07-13 00:31:53.847797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.863 [2024-07-13 00:31:53.847870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.863 [2024-07-13 00:31:53.847890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.863 [2024-07-13 00:31:53.851742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.863 [2024-07-13 00:31:53.851835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.863 [2024-07-13 00:31:53.851854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.863 [2024-07-13 00:31:53.855726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.863 [2024-07-13 00:31:53.855851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.863 [2024-07-13 00:31:53.855870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.863 [2024-07-13 00:31:53.859582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.863 [2024-07-13 00:31:53.859713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.863 [2024-07-13 00:31:53.859732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.863 [2024-07-13 00:31:53.863532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.863 [2024-07-13 00:31:53.863716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.863 [2024-07-13 00:31:53.863740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.863 [2024-07-13 00:31:53.867273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.863 [2024-07-13 00:31:53.867429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.863 [2024-07-13 00:31:53.867447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.863 [2024-07-13 00:31:53.871103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.863 [2024-07-13 00:31:53.871229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.863 [2024-07-13 00:31:53.871264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.863 [2024-07-13 00:31:53.874969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.863 [2024-07-13 00:31:53.875047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.863 [2024-07-13 00:31:53.875066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.863 [2024-07-13 00:31:53.878740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.863 [2024-07-13 00:31:53.878827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.863 [2024-07-13 00:31:53.878847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.863 [2024-07-13 00:31:53.882566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.863 [2024-07-13 00:31:53.882650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.863 [2024-07-13 00:31:53.882669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.863 [2024-07-13 00:31:53.886375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.863 [2024-07-13 00:31:53.886495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.863 [2024-07-13 00:31:53.886514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.863 [2024-07-13 00:31:53.890216] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.863 [2024-07-13 00:31:53.890319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.863 [2024-07-13 00:31:53.890337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.863 [2024-07-13 00:31:53.894083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.863 [2024-07-13 00:31:53.894251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.863 [2024-07-13 00:31:53.894269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.897828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.898009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.898030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.901587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.901743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.901762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.905341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.905428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.905447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.909081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.909169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.909188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.912817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.912894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.912913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:23:06.864 [2024-07-13 00:31:53.916436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.916556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.916574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.920297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.920408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.920426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.924160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.924330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.924348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.927898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.928074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.928093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.931757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.931900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.931935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.935522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.935622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.935652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.939394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.939465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.939483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.943209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.943283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.943301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.947047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.947182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.947215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.950871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.950996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.951014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.954663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.954830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.954848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.958433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.958573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.958592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.962232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.962361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.962380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.965942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.966015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.966034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.969631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.969713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.969731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.973317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.973388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.973407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.977000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.977140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.977159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.980709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.980807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.980826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.984380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.984555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.984573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.988050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.988216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.988235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.991868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.992012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.992030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.995514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.995603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.995634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:53.999287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:53.999374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:53.999392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.864 [2024-07-13 00:31:54.002982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.864 [2024-07-13 00:31:54.003051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.864 [2024-07-13 00:31:54.003069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.006709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.006828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.006848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.010445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.010547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.010566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.014258] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.014429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.014448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.017947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.018103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 
[2024-07-13 00:31:54.018122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.021695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.021833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.021852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.025370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.025451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.025468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.028982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.029085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.029103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.032695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.032771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.032798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.036375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.036497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.036515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.040141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.040244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.040273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.044273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.044466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.044485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.048388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.048545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.048564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.052232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.052356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.052375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.056041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.056116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.056135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.059785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.059858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.059876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.063583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.063678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.063697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.067295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.067415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.067433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.071083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.071188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.071206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.074912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.075079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.075098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.078591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.078813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.078848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.082302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.082446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.082465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.086050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.086138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.086156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.865 [2024-07-13 00:31:54.089986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:06.865 [2024-07-13 00:31:54.090087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.865 [2024-07-13 00:31:54.090108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.127 [2024-07-13 00:31:54.094092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.127 [2024-07-13 00:31:54.094186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.127 [2024-07-13 00:31:54.094205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.127 [2024-07-13 00:31:54.098157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.127 [2024-07-13 00:31:54.098334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.127 [2024-07-13 00:31:54.098365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.127 [2024-07-13 00:31:54.102133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.127 [2024-07-13 00:31:54.102242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.127 [2024-07-13 00:31:54.102271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.127 [2024-07-13 00:31:54.106078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.127 [2024-07-13 00:31:54.106258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.127 [2024-07-13 00:31:54.106283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.127 [2024-07-13 00:31:54.109912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.127 [2024-07-13 00:31:54.110127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.127 [2024-07-13 00:31:54.110152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.127 [2024-07-13 00:31:54.113797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.127 [2024-07-13 00:31:54.113925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.127 [2024-07-13 00:31:54.113945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.127 [2024-07-13 00:31:54.117632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.127 [2024-07-13 00:31:54.117744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.127 [2024-07-13 00:31:54.117763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.127 [2024-07-13 00:31:54.121339] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.127 [2024-07-13 00:31:54.121422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.127 [2024-07-13 00:31:54.121441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.127 [2024-07-13 00:31:54.125129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.127 
[2024-07-13 00:31:54.125199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.127 [2024-07-13 00:31:54.125218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.127 [2024-07-13 00:31:54.128818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.127 [2024-07-13 00:31:54.128954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.127 [2024-07-13 00:31:54.128974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.127 [2024-07-13 00:31:54.132545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.127 [2024-07-13 00:31:54.132684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.127 [2024-07-13 00:31:54.132704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.127 [2024-07-13 00:31:54.136316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.127 [2024-07-13 00:31:54.136491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.127 [2024-07-13 00:31:54.136525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.127 [2024-07-13 00:31:54.140080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.127 [2024-07-13 00:31:54.140266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.127 [2024-07-13 00:31:54.140284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.127 [2024-07-13 00:31:54.143912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.127 [2024-07-13 00:31:54.144042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.127 [2024-07-13 00:31:54.144061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.127 [2024-07-13 00:31:54.147574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.127 [2024-07-13 00:31:54.147703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.127 [2024-07-13 00:31:54.147722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.127 [2024-07-13 00:31:54.151295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.127 [2024-07-13 00:31:54.151378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.127 [2024-07-13 00:31:54.151397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.127 [2024-07-13 00:31:54.155059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.155130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.155149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.158801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.158938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.158956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.162623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.162735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.162754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.166406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.166572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.166590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.170139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.170329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.170348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.174017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.174156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.174174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.177809] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.177889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.177907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.181526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.181598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.181616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.185160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.185231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.185249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.188941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.189083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.189117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.192687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.192781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.192800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.196424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.196591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.196609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.200236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.200396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.200415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
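The block above repeats one pattern: tcp.c:2034:data_crc32_calc_done reports a data digest error on the receive path, and the matching WRITE command is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). In NVMe/TCP the data digest (DDGST) is a CRC32C computed over the DATA PDU payload, and a mismatch between the recomputed value and the digest carried in the PDU is what this error path reports. The stand-alone C sketch below is illustrative only; it is not SPDK source and is not part of this log, the function names are invented for the example, and it shows only the general shape of a CRC32C data-digest comparison.

/* Illustrative sketch, not SPDK code: a self-contained CRC32C (Castagnoli)
 * data-digest check in the spirit of the NVMe/TCP DDGST verification that
 * the log above shows failing. Names below are made up for this example. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                 /* conventional CRC32C seed */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)             /* reflected polynomial 0x82F63B78 */
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
    }
    return crc ^ 0xFFFFFFFFu;                   /* conventional final XOR */
}

/* Returns 0 when the payload matches the digest carried with the PDU,
 * non-zero otherwise (the "Data digest error" case in the log above). */
static int check_data_digest(const uint8_t *payload, size_t len, uint32_t ddgst)
{
    return crc32c(payload, len) != ddgst;
}

int main(void)
{
    uint8_t payload[32] = { 0 };
    uint32_t good = crc32c(payload, sizeof(payload));
    printf("intact: %d, corrupted digest: %d\n",
           check_data_digest(payload, sizeof(payload), good),
           check_data_digest(payload, sizeof(payload), good ^ 1u));
    return 0;
}

In the run above every WRITE on qid:1 trips this check, so each command is retired with the transient transport error status (00/22) rather than completing successfully, which is why the error/notice triplet recurs for each LBA.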
00:23:07.128 [2024-07-13 00:31:54.204048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.204178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.204197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.207770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.207844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.207862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.211450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.211540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.211558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.215171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.215243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.215261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.218932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.219061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.219080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.223052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.223161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.223179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.226869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.227045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.227064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.230650] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.230810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.230829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.234488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.234629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.234647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.238322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.238411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.238429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.242210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.242296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.242315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.246090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.246187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.246208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.250044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.250170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.250190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.254111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.254217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.254237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.258158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.258334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.258353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.262170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.262333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.262353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.266269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.266421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.266441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.270280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.270356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.270375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.274294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.274387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.274406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.278188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.278271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.278291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.282167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.282294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.282323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.286142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.286249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.286268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.290207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.290386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.290405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.294120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.294300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.294325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.297952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.298113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.298132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.301823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.301903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.128 [2024-07-13 00:31:54.301921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.128 [2024-07-13 00:31:54.305567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.128 [2024-07-13 00:31:54.305684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.129 [2024-07-13 00:31:54.305714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.129 [2024-07-13 00:31:54.309311] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.129 [2024-07-13 00:31:54.309386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.129 
[2024-07-13 00:31:54.309411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.129 [2024-07-13 00:31:54.313176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.129 [2024-07-13 00:31:54.313295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.129 [2024-07-13 00:31:54.313314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.129 [2024-07-13 00:31:54.316875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.129 [2024-07-13 00:31:54.316971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.129 [2024-07-13 00:31:54.317029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.129 [2024-07-13 00:31:54.320734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.129 [2024-07-13 00:31:54.320924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.129 [2024-07-13 00:31:54.320988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.129 [2024-07-13 00:31:54.324528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.129 [2024-07-13 00:31:54.324745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.129 [2024-07-13 00:31:54.324774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.129 [2024-07-13 00:31:54.328295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.129 [2024-07-13 00:31:54.328440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.129 [2024-07-13 00:31:54.328458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.129 [2024-07-13 00:31:54.332053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.129 [2024-07-13 00:31:54.332157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.129 [2024-07-13 00:31:54.332175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.129 [2024-07-13 00:31:54.335856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.129 [2024-07-13 00:31:54.335949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:07.129 [2024-07-13 00:31:54.335969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.129 [2024-07-13 00:31:54.339666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.129 [2024-07-13 00:31:54.339739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.129 [2024-07-13 00:31:54.339757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.129 [2024-07-13 00:31:54.343380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.129 [2024-07-13 00:31:54.343511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.129 [2024-07-13 00:31:54.343531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.129 [2024-07-13 00:31:54.347268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.129 [2024-07-13 00:31:54.347390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.129 [2024-07-13 00:31:54.347408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.129 [2024-07-13 00:31:54.351577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.129 [2024-07-13 00:31:54.351811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.129 [2024-07-13 00:31:54.351841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.418 [2024-07-13 00:31:54.356243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.418 [2024-07-13 00:31:54.356505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.418 [2024-07-13 00:31:54.356535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.418 [2024-07-13 00:31:54.360907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.418 [2024-07-13 00:31:54.361116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.418 [2024-07-13 00:31:54.361138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.418 [2024-07-13 00:31:54.365691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.418 [2024-07-13 00:31:54.365779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.418 [2024-07-13 00:31:54.365802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.418 [2024-07-13 00:31:54.370185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.418 [2024-07-13 00:31:54.370305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.418 [2024-07-13 00:31:54.370327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.418 [2024-07-13 00:31:54.375021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.418 [2024-07-13 00:31:54.375111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.418 [2024-07-13 00:31:54.375132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.418 [2024-07-13 00:31:54.379178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.418 [2024-07-13 00:31:54.379334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.418 [2024-07-13 00:31:54.379355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.418 [2024-07-13 00:31:54.383259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.418 [2024-07-13 00:31:54.383402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.418 [2024-07-13 00:31:54.383422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.418 [2024-07-13 00:31:54.387427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.418 [2024-07-13 00:31:54.387605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.418 [2024-07-13 00:31:54.387625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.418 [2024-07-13 00:31:54.391414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d2960) with pdu=0x2000190fef90 00:23:07.418 [2024-07-13 00:31:54.391627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.418 [2024-07-13 00:31:54.391658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.418 00:23:07.418 Latency(us) 00:23:07.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.418 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 
00:23:07.418 nvme0n1 : 2.00 7834.18 979.27 0.00 0.00 2038.09 1630.95 11975.21 00:23:07.418 =================================================================================================================== 00:23:07.418 Total : 7834.18 979.27 0.00 0.00 2038.09 1630.95 11975.21 00:23:07.418 0 00:23:07.418 00:31:54 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:07.419 00:31:54 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:07.419 00:31:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:07.419 00:31:54 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:07.419 | .driver_specific 00:23:07.419 | .nvme_error 00:23:07.419 | .status_code 00:23:07.419 | .command_transient_transport_error' 00:23:07.701 00:31:54 -- host/digest.sh@71 -- # (( 505 > 0 )) 00:23:07.701 00:31:54 -- host/digest.sh@73 -- # killprocess 97463 00:23:07.701 00:31:54 -- common/autotest_common.sh@926 -- # '[' -z 97463 ']' 00:23:07.701 00:31:54 -- common/autotest_common.sh@930 -- # kill -0 97463 00:23:07.701 00:31:54 -- common/autotest_common.sh@931 -- # uname 00:23:07.701 00:31:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:07.701 00:31:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97463 00:23:07.701 killing process with pid 97463 00:23:07.701 Received shutdown signal, test time was about 2.000000 seconds 00:23:07.701 00:23:07.701 Latency(us) 00:23:07.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.701 =================================================================================================================== 00:23:07.701 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:07.701 00:31:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:07.701 00:31:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:07.701 00:31:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97463' 00:23:07.701 00:31:54 -- common/autotest_common.sh@945 -- # kill 97463 00:23:07.701 00:31:54 -- common/autotest_common.sh@950 -- # wait 97463 00:23:07.965 00:31:54 -- host/digest.sh@115 -- # killprocess 97147 00:23:07.965 00:31:54 -- common/autotest_common.sh@926 -- # '[' -z 97147 ']' 00:23:07.965 00:31:54 -- common/autotest_common.sh@930 -- # kill -0 97147 00:23:07.965 00:31:54 -- common/autotest_common.sh@931 -- # uname 00:23:07.965 00:31:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:07.965 00:31:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97147 00:23:07.965 killing process with pid 97147 00:23:07.965 00:31:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:07.965 00:31:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:07.965 00:31:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97147' 00:23:07.965 00:31:55 -- common/autotest_common.sh@945 -- # kill 97147 00:23:07.965 00:31:55 -- common/autotest_common.sh@950 -- # wait 97147 00:23:08.223 00:23:08.223 real 0m18.420s 00:23:08.223 user 0m34.410s 00:23:08.223 sys 0m5.004s 00:23:08.223 00:31:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:08.223 ************************************ 00:23:08.223 00:31:55 -- common/autotest_common.sh@10 -- # set +x 00:23:08.223 END TEST nvmf_digest_error 00:23:08.223 ************************************ 00:23:08.223 00:31:55 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:23:08.223 00:31:55 -- 
host/digest.sh@139 -- # nvmftestfini 00:23:08.223 00:31:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:08.223 00:31:55 -- nvmf/common.sh@116 -- # sync 00:23:08.223 00:31:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:08.223 00:31:55 -- nvmf/common.sh@119 -- # set +e 00:23:08.223 00:31:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:08.223 00:31:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:08.223 rmmod nvme_tcp 00:23:08.223 rmmod nvme_fabrics 00:23:08.223 rmmod nvme_keyring 00:23:08.223 00:31:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:08.223 00:31:55 -- nvmf/common.sh@123 -- # set -e 00:23:08.223 00:31:55 -- nvmf/common.sh@124 -- # return 0 00:23:08.223 00:31:55 -- nvmf/common.sh@477 -- # '[' -n 97147 ']' 00:23:08.223 00:31:55 -- nvmf/common.sh@478 -- # killprocess 97147 00:23:08.223 00:31:55 -- common/autotest_common.sh@926 -- # '[' -z 97147 ']' 00:23:08.223 00:31:55 -- common/autotest_common.sh@930 -- # kill -0 97147 00:23:08.223 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (97147) - No such process 00:23:08.223 Process with pid 97147 is not found 00:23:08.223 00:31:55 -- common/autotest_common.sh@953 -- # echo 'Process with pid 97147 is not found' 00:23:08.223 00:31:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:08.223 00:31:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:08.223 00:31:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:08.224 00:31:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:08.224 00:31:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:08.224 00:31:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.224 00:31:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.224 00:31:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.482 00:31:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:08.482 00:23:08.482 real 0m38.105s 00:23:08.482 user 1m10.231s 00:23:08.482 sys 0m10.221s 00:23:08.482 00:31:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:08.482 00:31:55 -- common/autotest_common.sh@10 -- # set +x 00:23:08.482 ************************************ 00:23:08.482 END TEST nvmf_digest 00:23:08.482 ************************************ 00:23:08.482 00:31:55 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:23:08.482 00:31:55 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:23:08.482 00:31:55 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:08.482 00:31:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:08.482 00:31:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:08.482 00:31:55 -- common/autotest_common.sh@10 -- # set +x 00:23:08.482 ************************************ 00:23:08.482 START TEST nvmf_mdns_discovery 00:23:08.482 ************************************ 00:23:08.482 00:31:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:08.482 * Looking for test storage... 
00:23:08.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:08.482 00:31:55 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:08.482 00:31:55 -- nvmf/common.sh@7 -- # uname -s 00:23:08.483 00:31:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:08.483 00:31:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:08.483 00:31:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:08.483 00:31:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:08.483 00:31:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:08.483 00:31:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:08.483 00:31:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:08.483 00:31:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:08.483 00:31:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:08.483 00:31:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:08.483 00:31:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:23:08.483 00:31:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:23:08.483 00:31:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:08.483 00:31:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:08.483 00:31:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:08.483 00:31:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:08.483 00:31:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:08.483 00:31:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:08.483 00:31:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:08.483 00:31:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.483 00:31:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.483 00:31:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.483 00:31:55 -- 
paths/export.sh@5 -- # export PATH 00:23:08.483 00:31:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.483 00:31:55 -- nvmf/common.sh@46 -- # : 0 00:23:08.483 00:31:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:08.483 00:31:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:08.483 00:31:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:08.483 00:31:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:08.483 00:31:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:08.483 00:31:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:08.483 00:31:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:08.483 00:31:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:08.483 00:31:55 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:23:08.483 00:31:55 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:23:08.483 00:31:55 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:08.483 00:31:55 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:08.483 00:31:55 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:08.483 00:31:55 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:08.483 00:31:55 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:23:08.483 00:31:55 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:23:08.483 00:31:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:08.483 00:31:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:08.483 00:31:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:08.483 00:31:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:08.483 00:31:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:08.483 00:31:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.483 00:31:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.483 00:31:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.483 00:31:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:08.483 00:31:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:08.483 00:31:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:08.483 00:31:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:08.483 00:31:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:08.483 00:31:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:08.483 00:31:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.483 00:31:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.483 00:31:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:08.483 00:31:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:08.483 00:31:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:08.483 00:31:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:08.483 00:31:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:08.483 00:31:55 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.483 00:31:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:08.483 00:31:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:08.483 00:31:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:08.483 00:31:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:08.483 00:31:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:08.483 00:31:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:08.483 Cannot find device "nvmf_tgt_br" 00:23:08.483 00:31:55 -- nvmf/common.sh@154 -- # true 00:23:08.483 00:31:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:08.483 Cannot find device "nvmf_tgt_br2" 00:23:08.483 00:31:55 -- nvmf/common.sh@155 -- # true 00:23:08.483 00:31:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:08.483 00:31:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:08.742 Cannot find device "nvmf_tgt_br" 00:23:08.742 00:31:55 -- nvmf/common.sh@157 -- # true 00:23:08.742 00:31:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:08.742 Cannot find device "nvmf_tgt_br2" 00:23:08.742 00:31:55 -- nvmf/common.sh@158 -- # true 00:23:08.742 00:31:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:08.742 00:31:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:08.742 00:31:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:08.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:08.742 00:31:55 -- nvmf/common.sh@161 -- # true 00:23:08.742 00:31:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:08.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:08.742 00:31:55 -- nvmf/common.sh@162 -- # true 00:23:08.742 00:31:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:08.742 00:31:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:08.742 00:31:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:08.742 00:31:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:08.742 00:31:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:08.742 00:31:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:08.742 00:31:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:08.742 00:31:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:08.742 00:31:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:08.742 00:31:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:08.742 00:31:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:08.742 00:31:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:08.742 00:31:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:08.742 00:31:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:08.742 00:31:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:08.742 00:31:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:08.742 00:31:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:23:08.742 00:31:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:08.742 00:31:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:09.001 00:31:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:09.001 00:31:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:09.001 00:31:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:09.001 00:31:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:09.001 00:31:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:09.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:23:09.001 00:23:09.001 --- 10.0.0.2 ping statistics --- 00:23:09.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.001 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:09.001 00:31:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:09.001 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:09.001 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:23:09.001 00:23:09.001 --- 10.0.0.3 ping statistics --- 00:23:09.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.001 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:23:09.001 00:31:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:09.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:09.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:23:09.001 00:23:09.001 --- 10.0.0.1 ping statistics --- 00:23:09.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.001 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:23:09.001 00:31:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.001 00:31:56 -- nvmf/common.sh@421 -- # return 0 00:23:09.001 00:31:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:09.001 00:31:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.001 00:31:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:09.001 00:31:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:09.001 00:31:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.001 00:31:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:09.001 00:31:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:09.001 00:31:56 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:09.001 00:31:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:09.001 00:31:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:09.001 00:31:56 -- common/autotest_common.sh@10 -- # set +x 00:23:09.001 00:31:56 -- nvmf/common.sh@469 -- # nvmfpid=97754 00:23:09.001 00:31:56 -- nvmf/common.sh@470 -- # waitforlisten 97754 00:23:09.001 00:31:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:09.001 00:31:56 -- common/autotest_common.sh@819 -- # '[' -z 97754 ']' 00:23:09.001 00:31:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.001 00:31:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:09.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.001 00:31:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:09.001 00:31:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:09.001 00:31:56 -- common/autotest_common.sh@10 -- # set +x 00:23:09.001 [2024-07-13 00:31:56.126960] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:09.001 [2024-07-13 00:31:56.127065] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.259 [2024-07-13 00:31:56.267941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.259 [2024-07-13 00:31:56.365987] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:09.259 [2024-07-13 00:31:56.366179] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.259 [2024-07-13 00:31:56.366195] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.259 [2024-07-13 00:31:56.366207] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.259 [2024-07-13 00:31:56.366246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.193 00:31:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:10.193 00:31:57 -- common/autotest_common.sh@852 -- # return 0 00:23:10.194 00:31:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:10.194 00:31:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:10.194 00:31:57 -- common/autotest_common.sh@10 -- # set +x 00:23:10.194 00:31:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.194 00:31:57 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:10.194 00:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.194 00:31:57 -- common/autotest_common.sh@10 -- # set +x 00:23:10.194 00:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.194 00:31:57 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:23:10.194 00:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.194 00:31:57 -- common/autotest_common.sh@10 -- # set +x 00:23:10.194 00:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.194 00:31:57 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:10.194 00:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.194 00:31:57 -- common/autotest_common.sh@10 -- # set +x 00:23:10.194 [2024-07-13 00:31:57.300279] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.194 00:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.194 00:31:57 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:10.194 00:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.194 00:31:57 -- common/autotest_common.sh@10 -- # set +x 00:23:10.194 [2024-07-13 00:31:57.308411] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:10.194 00:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.194 00:31:57 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:10.194 00:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.194 00:31:57 -- 
common/autotest_common.sh@10 -- # set +x 00:23:10.194 null0 00:23:10.194 00:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.194 00:31:57 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:10.194 00:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.194 00:31:57 -- common/autotest_common.sh@10 -- # set +x 00:23:10.194 null1 00:23:10.194 00:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.194 00:31:57 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:10.194 00:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.194 00:31:57 -- common/autotest_common.sh@10 -- # set +x 00:23:10.194 null2 00:23:10.194 00:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.194 00:31:57 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:10.194 00:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.194 00:31:57 -- common/autotest_common.sh@10 -- # set +x 00:23:10.194 null3 00:23:10.194 00:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.194 00:31:57 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:23:10.194 00:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.194 00:31:57 -- common/autotest_common.sh@10 -- # set +x 00:23:10.194 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:10.194 00:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.194 00:31:57 -- host/mdns_discovery.sh@47 -- # hostpid=97804 00:23:10.194 00:31:57 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:10.194 00:31:57 -- host/mdns_discovery.sh@48 -- # waitforlisten 97804 /tmp/host.sock 00:23:10.194 00:31:57 -- common/autotest_common.sh@819 -- # '[' -z 97804 ']' 00:23:10.194 00:31:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:23:10.194 00:31:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:10.194 00:31:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:10.194 00:31:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:10.194 00:31:57 -- common/autotest_common.sh@10 -- # set +x 00:23:10.194 [2024-07-13 00:31:57.414855] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:23:10.194 [2024-07-13 00:31:57.415296] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97804 ] 00:23:10.452 [2024-07-13 00:31:57.558629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.452 [2024-07-13 00:31:57.657021] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:10.452 [2024-07-13 00:31:57.657579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.388 00:31:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:11.388 00:31:58 -- common/autotest_common.sh@852 -- # return 0 00:23:11.388 00:31:58 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:11.388 00:31:58 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:23:11.388 00:31:58 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:23:11.388 00:31:58 -- host/mdns_discovery.sh@57 -- # avahipid=97833 00:23:11.388 00:31:58 -- host/mdns_discovery.sh@58 -- # sleep 1 00:23:11.388 00:31:58 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:11.388 00:31:58 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:11.388 Process 985 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:11.388 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:11.388 Successfully dropped root privileges. 00:23:11.388 avahi-daemon 0.8 starting up. 00:23:11.388 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:11.388 Successfully called chroot(). 00:23:11.388 Successfully dropped remaining capabilities. 00:23:11.388 No service file found in /etc/avahi/services. 00:23:12.324 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:12.324 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:23:12.324 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:12.324 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:12.324 Network interface enumeration completed. 00:23:12.324 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:23:12.324 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:23:12.324 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:23:12.324 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:23:12.324 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 2682336678. 
00:23:12.324 00:31:59 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:12.324 00:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.324 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:23:12.324 00:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.324 00:31:59 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:12.324 00:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.324 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:23:12.324 00:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.324 00:31:59 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:23:12.324 00:31:59 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:23:12.324 00:31:59 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:12.324 00:31:59 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:12.324 00:31:59 -- host/mdns_discovery.sh@68 -- # sort 00:23:12.324 00:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.324 00:31:59 -- host/mdns_discovery.sh@68 -- # xargs 00:23:12.324 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:23:12.324 00:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.324 00:31:59 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:23:12.324 00:31:59 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:23:12.324 00:31:59 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.324 00:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.324 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:23:12.324 00:31:59 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:12.324 00:31:59 -- host/mdns_discovery.sh@64 -- # sort 00:23:12.324 00:31:59 -- host/mdns_discovery.sh@64 -- # xargs 00:23:12.324 00:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.580 00:31:59 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:12.581 00:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.581 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:23:12.581 00:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:12.581 00:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.581 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@68 -- # sort 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@68 -- # xargs 00:23:12.581 00:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@64 -- # sort 00:23:12.581 00:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.581 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:23:12.581 00:31:59 -- 
host/mdns_discovery.sh@64 -- # xargs 00:23:12.581 00:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:12.581 00:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.581 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:23:12.581 00:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:12.581 00:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.581 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@68 -- # sort 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@68 -- # xargs 00:23:12.581 00:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.581 [2024-07-13 00:31:59.765233] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:12.581 00:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@64 -- # sort 00:23:12.581 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:23:12.581 00:31:59 -- host/mdns_discovery.sh@64 -- # xargs 00:23:12.581 00:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.837 00:31:59 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:23:12.837 00:31:59 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:12.837 00:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.837 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:23:12.837 [2024-07-13 00:31:59.837137] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.837 00:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.837 00:31:59 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:12.837 00:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.837 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:23:12.837 00:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.837 00:31:59 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:12.837 00:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.837 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:23:12.837 00:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.837 00:31:59 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:12.837 00:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.837 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:23:12.837 00:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.837 00:31:59 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:12.837 00:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.837 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:23:12.837 00:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.837 00:31:59 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:12.837 00:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.837 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:23:12.837 [2024-07-13 00:31:59.877063] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:12.837 00:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.837 00:31:59 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:12.837 00:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.837 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:23:12.837 [2024-07-13 00:31:59.885052] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:12.837 00:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.837 00:31:59 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=97890 00:23:12.837 00:31:59 -- host/mdns_discovery.sh@125 -- # sleep 5 00:23:12.837 00:31:59 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:23:13.771 [2024-07-13 00:32:00.665239] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:13.771 Established under name 'CDC' 00:23:14.030 [2024-07-13 00:32:01.065305] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:14.030 [2024-07-13 00:32:01.065365] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:23:14.030 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:14.030 cookie is 0 00:23:14.030 is_local: 1 00:23:14.030 our_own: 0 00:23:14.030 wide_area: 0 00:23:14.030 multicast: 1 00:23:14.030 cached: 1 00:23:14.030 [2024-07-13 00:32:01.165271] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:14.030 [2024-07-13 00:32:01.165321] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:23:14.030 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:14.030 cookie is 0 00:23:14.030 is_local: 1 00:23:14.030 our_own: 0 00:23:14.030 wide_area: 0 00:23:14.030 multicast: 1 00:23:14.030 cached: 1 00:23:14.962 [2024-07-13 00:32:02.073855] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:14.962 [2024-07-13 00:32:02.073908] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:14.962 [2024-07-13 00:32:02.073926] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:14.962 [2024-07-13 00:32:02.160001] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:23:14.962 [2024-07-13 00:32:02.173484] bdev_nvme.c:6759:discovery_attach_cb: 
*INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:14.962 [2024-07-13 00:32:02.173507] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:14.962 [2024-07-13 00:32:02.173521] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:15.220 [2024-07-13 00:32:02.222737] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:15.220 [2024-07-13 00:32:02.222781] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:15.220 [2024-07-13 00:32:02.258900] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:23:15.220 [2024-07-13 00:32:02.314311] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:15.220 [2024-07-13 00:32:02.314357] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:17.746 00:32:04 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:17.746 00:32:04 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:17.746 00:32:04 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:17.746 00:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:17.746 00:32:04 -- common/autotest_common.sh@10 -- # set +x 00:23:17.746 00:32:04 -- host/mdns_discovery.sh@80 -- # sort 00:23:17.746 00:32:04 -- host/mdns_discovery.sh@80 -- # xargs 00:23:17.746 00:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:17.747 00:32:04 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:17.747 00:32:04 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:17.747 00:32:04 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:17.747 00:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:17.747 00:32:04 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:17.747 00:32:04 -- common/autotest_common.sh@10 -- # set +x 00:23:17.747 00:32:04 -- host/mdns_discovery.sh@76 -- # sort 00:23:17.747 00:32:04 -- host/mdns_discovery.sh@76 -- # xargs 00:23:17.747 00:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.003 00:32:05 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:18.003 00:32:05 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:18.003 00:32:05 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:18.003 00:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.003 00:32:05 -- common/autotest_common.sh@10 -- # set +x 00:23:18.003 00:32:05 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:18.003 00:32:05 -- host/mdns_discovery.sh@68 -- # sort 00:23:18.003 00:32:05 -- host/mdns_discovery.sh@68 -- # xargs 00:23:18.003 00:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.003 00:32:05 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:18.003 00:32:05 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:18.003 00:32:05 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:18.003 00:32:05 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:23:18.003 00:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.004 00:32:05 -- common/autotest_common.sh@10 -- # set +x 00:23:18.004 00:32:05 -- host/mdns_discovery.sh@64 -- # sort 00:23:18.004 00:32:05 -- host/mdns_discovery.sh@64 -- # xargs 00:23:18.004 00:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.004 00:32:05 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:18.004 00:32:05 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:18.004 00:32:05 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:18.004 00:32:05 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:18.004 00:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.004 00:32:05 -- common/autotest_common.sh@10 -- # set +x 00:23:18.004 00:32:05 -- host/mdns_discovery.sh@72 -- # xargs 00:23:18.004 00:32:05 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:18.004 00:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.004 00:32:05 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:18.004 00:32:05 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:18.004 00:32:05 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:18.004 00:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.004 00:32:05 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:18.004 00:32:05 -- common/autotest_common.sh@10 -- # set +x 00:23:18.004 00:32:05 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:18.004 00:32:05 -- host/mdns_discovery.sh@72 -- # xargs 00:23:18.004 00:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.261 00:32:05 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:18.261 00:32:05 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:18.261 00:32:05 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:18.261 00:32:05 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:18.261 00:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.261 00:32:05 -- common/autotest_common.sh@10 -- # set +x 00:23:18.261 00:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.261 00:32:05 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:18.261 00:32:05 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:18.261 00:32:05 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:18.261 00:32:05 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:18.261 00:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.261 00:32:05 -- common/autotest_common.sh@10 -- # set +x 00:23:18.261 00:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.261 00:32:05 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:18.261 00:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:18.261 00:32:05 -- common/autotest_common.sh@10 -- # set +x 00:23:18.261 00:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:18.261 00:32:05 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:19.191 00:32:06 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:19.191 00:32:06 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.191 00:32:06 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:19.191 00:32:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:19.191 00:32:06 -- host/mdns_discovery.sh@64 -- # sort 00:23:19.191 00:32:06 -- host/mdns_discovery.sh@64 -- # xargs 00:23:19.191 00:32:06 -- common/autotest_common.sh@10 -- # set +x 00:23:19.191 00:32:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:19.191 00:32:06 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:19.191 00:32:06 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:19.191 00:32:06 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:19.191 00:32:06 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:19.191 00:32:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:19.191 00:32:06 -- common/autotest_common.sh@10 -- # set +x 00:23:19.191 00:32:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:19.447 00:32:06 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:19.447 00:32:06 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:19.447 00:32:06 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:19.447 00:32:06 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:19.447 00:32:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:19.447 00:32:06 -- common/autotest_common.sh@10 -- # set +x 00:23:19.447 [2024-07-13 00:32:06.448152] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:19.447 [2024-07-13 00:32:06.449287] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:19.447 [2024-07-13 00:32:06.449319] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:19.447 [2024-07-13 00:32:06.449352] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:19.447 [2024-07-13 00:32:06.449366] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:19.447 00:32:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:19.447 00:32:06 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:19.447 00:32:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:19.447 00:32:06 -- common/autotest_common.sh@10 -- # set +x 00:23:19.447 [2024-07-13 00:32:06.456040] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:19.447 [2024-07-13 00:32:06.456229] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:19.447 [2024-07-13 00:32:06.456270] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:19.447 00:32:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:19.447 00:32:06 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:19.447 [2024-07-13 00:32:06.587330] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:19.447 [2024-07-13 00:32:06.587519] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:19.447 [2024-07-13 00:32:06.646604] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:19.447 [2024-07-13 00:32:06.646633] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:19.447 [2024-07-13 00:32:06.646639] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:19.447 [2024-07-13 00:32:06.646655] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:19.447 [2024-07-13 00:32:06.646737] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 
done 00:23:19.447 [2024-07-13 00:32:06.646745] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:19.447 [2024-07-13 00:32:06.646750] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:19.447 [2024-07-13 00:32:06.646762] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:19.704 [2024-07-13 00:32:06.692425] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:19.704 [2024-07-13 00:32:06.692443] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:19.704 [2024-07-13 00:32:06.692479] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:19.704 [2024-07-13 00:32:06.692487] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:20.266 00:32:07 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:20.266 00:32:07 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:20.266 00:32:07 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:20.266 00:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:20.266 00:32:07 -- host/mdns_discovery.sh@68 -- # sort 00:23:20.266 00:32:07 -- host/mdns_discovery.sh@68 -- # xargs 00:23:20.266 00:32:07 -- common/autotest_common.sh@10 -- # set +x 00:23:20.266 00:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.523 00:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:20.523 00:32:07 -- common/autotest_common.sh@10 -- # set +x 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@64 -- # sort 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@64 -- # xargs 00:23:20.523 00:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:20.523 00:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:20.523 00:32:07 -- common/autotest_common.sh@10 -- # set +x 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@72 -- # xargs 00:23:20.523 00:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:20.523 00:32:07 -- 
host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@72 -- # xargs 00:23:20.523 00:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:20.523 00:32:07 -- common/autotest_common.sh@10 -- # set +x 00:23:20.523 00:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:20.523 00:32:07 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:20.523 00:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:20.523 00:32:07 -- common/autotest_common.sh@10 -- # set +x 00:23:20.523 00:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:20.783 00:32:07 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:20.783 00:32:07 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:20.783 00:32:07 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:20.783 00:32:07 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:20.783 00:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:20.783 00:32:07 -- common/autotest_common.sh@10 -- # set +x 00:23:20.783 [2024-07-13 00:32:07.780958] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:20.783 [2024-07-13 00:32:07.781021] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:20.783 [2024-07-13 00:32:07.781056] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:20.783 [2024-07-13 00:32:07.781070] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:20.783 [2024-07-13 00:32:07.783025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.783 [2024-07-13 00:32:07.783075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.783 [2024-07-13 00:32:07.783087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.783 [2024-07-13 00:32:07.783096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.783 [2024-07-13 00:32:07.783106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.783 [2024-07-13 00:32:07.783115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.783 [2024-07-13 00:32:07.783124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.783 [2024-07-13 00:32:07.783132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.783 [2024-07-13 00:32:07.783140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247c2c0 is same with the state(5) to be set 00:23:20.783 00:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:20.783 00:32:07 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:20.783 00:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:20.783 00:32:07 -- common/autotest_common.sh@10 -- # set +x 00:23:20.783 [2024-07-13 00:32:07.788966] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:20.783 [2024-07-13 00:32:07.789034] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:20.783 [2024-07-13 00:32:07.790310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.783 [2024-07-13 00:32:07.790359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.783 [2024-07-13 00:32:07.790369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.783 [2024-07-13 00:32:07.790378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.783 [2024-07-13 00:32:07.790387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.783 [2024-07-13 00:32:07.790395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.783 [2024-07-13 00:32:07.790405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.783 [2024-07-13 00:32:07.790412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.783 [2024-07-13 00:32:07.790420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295760 is same with the state(5) to be set 00:23:20.783 [2024-07-13 00:32:07.792980] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247c2c0 (9): Bad file descriptor 00:23:20.783 00:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:20.783 00:32:07 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:20.783 [2024-07-13 00:32:07.800281] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295760 (9): Bad file descriptor 00:23:20.783 [2024-07-13 00:32:07.803005] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.783 [2024-07-13 00:32:07.803150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.783 [2024-07-13 00:32:07.803197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.783 [2024-07-13 00:32:07.803213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x247c2c0 with addr=10.0.0.2, port=4420 00:23:20.783 [2024-07-13 00:32:07.803223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247c2c0 is same with the state(5) to be 
set 00:23:20.783 [2024-07-13 00:32:07.803239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247c2c0 (9): Bad file descriptor 00:23:20.783 [2024-07-13 00:32:07.803253] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.783 [2024-07-13 00:32:07.803261] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.783 [2024-07-13 00:32:07.803272] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.783 [2024-07-13 00:32:07.803286] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.783 [2024-07-13 00:32:07.810292] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:20.783 [2024-07-13 00:32:07.810380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.783 [2024-07-13 00:32:07.810422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.783 [2024-07-13 00:32:07.810437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2295760 with addr=10.0.0.3, port=4420 00:23:20.783 [2024-07-13 00:32:07.810446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295760 is same with the state(5) to be set 00:23:20.783 [2024-07-13 00:32:07.810460] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295760 (9): Bad file descriptor 00:23:20.783 [2024-07-13 00:32:07.810479] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:20.783 [2024-07-13 00:32:07.810486] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:20.783 [2024-07-13 00:32:07.810494] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:20.783 [2024-07-13 00:32:07.810507] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.783 [2024-07-13 00:32:07.813102] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.783 [2024-07-13 00:32:07.813187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.783 [2024-07-13 00:32:07.813228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.783 [2024-07-13 00:32:07.813243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x247c2c0 with addr=10.0.0.2, port=4420 00:23:20.783 [2024-07-13 00:32:07.813252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247c2c0 is same with the state(5) to be set 00:23:20.783 [2024-07-13 00:32:07.813266] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247c2c0 (9): Bad file descriptor 00:23:20.783 [2024-07-13 00:32:07.813278] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.783 [2024-07-13 00:32:07.813285] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.783 [2024-07-13 00:32:07.813292] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:23:20.783 [2024-07-13 00:32:07.813305] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.783 [2024-07-13 00:32:07.820350] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:20.783 [2024-07-13 00:32:07.820434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.783 [2024-07-13 00:32:07.820475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.783 [2024-07-13 00:32:07.820489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2295760 with addr=10.0.0.3, port=4420 00:23:20.783 [2024-07-13 00:32:07.820498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295760 is same with the state(5) to be set 00:23:20.783 [2024-07-13 00:32:07.820512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295760 (9): Bad file descriptor 00:23:20.783 [2024-07-13 00:32:07.820534] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:20.783 [2024-07-13 00:32:07.820543] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:20.783 [2024-07-13 00:32:07.820551] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:20.783 [2024-07-13 00:32:07.820563] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.783 [2024-07-13 00:32:07.823159] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.783 [2024-07-13 00:32:07.823241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.783 [2024-07-13 00:32:07.823281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.783 [2024-07-13 00:32:07.823296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x247c2c0 with addr=10.0.0.2, port=4420 00:23:20.783 [2024-07-13 00:32:07.823305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247c2c0 is same with the state(5) to be set 00:23:20.783 [2024-07-13 00:32:07.823318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247c2c0 (9): Bad file descriptor 00:23:20.783 [2024-07-13 00:32:07.823330] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.783 [2024-07-13 00:32:07.823337] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.783 [2024-07-13 00:32:07.823345] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.783 [2024-07-13 00:32:07.823357] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
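The burst of "connect() failed, errno = 111" (ECONNREFUSED) entries above and below comes from both hosts retrying the 4420 listeners that were just removed while the replacement listeners on 4421 are picked up. A minimal target-side sketch of the RPC sequence driving this failover, assuming SPDK's scripts/rpc.py as the JSON-RPC client (the rpc_cmd helper in the trace is assumed to wrap it); the NQNs, addresses and ports are the ones shown in the trace:

    # add the replacement listeners on 4421, then drop the original 4420 listeners
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420

The reset attempts keep failing until the next discovery log page drops the stale 4420 path, at which point the retries stop.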
00:23:20.783 [2024-07-13 00:32:07.830407] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:20.783 [2024-07-13 00:32:07.830490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.783 [2024-07-13 00:32:07.830532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.783 [2024-07-13 00:32:07.830546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2295760 with addr=10.0.0.3, port=4420 00:23:20.783 [2024-07-13 00:32:07.830554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295760 is same with the state(5) to be set 00:23:20.784 [2024-07-13 00:32:07.830568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295760 (9): Bad file descriptor 00:23:20.784 [2024-07-13 00:32:07.830580] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:20.784 [2024-07-13 00:32:07.830587] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:20.784 [2024-07-13 00:32:07.830602] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:20.784 [2024-07-13 00:32:07.830614] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.784 [2024-07-13 00:32:07.833215] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.784 [2024-07-13 00:32:07.833296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.833336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.833351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x247c2c0 with addr=10.0.0.2, port=4420 00:23:20.784 [2024-07-13 00:32:07.833359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247c2c0 is same with the state(5) to be set 00:23:20.784 [2024-07-13 00:32:07.833373] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247c2c0 (9): Bad file descriptor 00:23:20.784 [2024-07-13 00:32:07.833385] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.784 [2024-07-13 00:32:07.833392] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.784 [2024-07-13 00:32:07.833400] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.784 [2024-07-13 00:32:07.833411] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:20.784 [2024-07-13 00:32:07.840465] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:20.784 [2024-07-13 00:32:07.840556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.840599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.840614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2295760 with addr=10.0.0.3, port=4420 00:23:20.784 [2024-07-13 00:32:07.840623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295760 is same with the state(5) to be set 00:23:20.784 [2024-07-13 00:32:07.840682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295760 (9): Bad file descriptor 00:23:20.784 [2024-07-13 00:32:07.840699] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:20.784 [2024-07-13 00:32:07.840706] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:20.784 [2024-07-13 00:32:07.840714] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:20.784 [2024-07-13 00:32:07.840727] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.784 [2024-07-13 00:32:07.843270] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.784 [2024-07-13 00:32:07.843352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.843393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.843411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x247c2c0 with addr=10.0.0.2, port=4420 00:23:20.784 [2024-07-13 00:32:07.843419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247c2c0 is same with the state(5) to be set 00:23:20.784 [2024-07-13 00:32:07.843433] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247c2c0 (9): Bad file descriptor 00:23:20.784 [2024-07-13 00:32:07.843444] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.784 [2024-07-13 00:32:07.843452] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.784 [2024-07-13 00:32:07.843460] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.784 [2024-07-13 00:32:07.843471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:20.784 [2024-07-13 00:32:07.850527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:20.784 [2024-07-13 00:32:07.850609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.850664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.850681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2295760 with addr=10.0.0.3, port=4420 00:23:20.784 [2024-07-13 00:32:07.850690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295760 is same with the state(5) to be set 00:23:20.784 [2024-07-13 00:32:07.850704] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295760 (9): Bad file descriptor 00:23:20.784 [2024-07-13 00:32:07.850716] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:20.784 [2024-07-13 00:32:07.850724] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:20.784 [2024-07-13 00:32:07.850732] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:20.784 [2024-07-13 00:32:07.850744] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.784 [2024-07-13 00:32:07.853326] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.784 [2024-07-13 00:32:07.853407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.853447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.853462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x247c2c0 with addr=10.0.0.2, port=4420 00:23:20.784 [2024-07-13 00:32:07.853470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247c2c0 is same with the state(5) to be set 00:23:20.784 [2024-07-13 00:32:07.853484] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247c2c0 (9): Bad file descriptor 00:23:20.784 [2024-07-13 00:32:07.853496] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.784 [2024-07-13 00:32:07.853503] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.784 [2024-07-13 00:32:07.853511] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.784 [2024-07-13 00:32:07.853522] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:20.784 [2024-07-13 00:32:07.860583] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:20.784 [2024-07-13 00:32:07.860697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.860741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.860756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2295760 with addr=10.0.0.3, port=4420 00:23:20.784 [2024-07-13 00:32:07.860764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295760 is same with the state(5) to be set 00:23:20.784 [2024-07-13 00:32:07.860779] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295760 (9): Bad file descriptor 00:23:20.784 [2024-07-13 00:32:07.860791] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:20.784 [2024-07-13 00:32:07.860798] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:20.784 [2024-07-13 00:32:07.860807] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:20.784 [2024-07-13 00:32:07.860819] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.784 [2024-07-13 00:32:07.863380] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.784 [2024-07-13 00:32:07.863460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.863500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.863515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x247c2c0 with addr=10.0.0.2, port=4420 00:23:20.784 [2024-07-13 00:32:07.863523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247c2c0 is same with the state(5) to be set 00:23:20.784 [2024-07-13 00:32:07.863537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247c2c0 (9): Bad file descriptor 00:23:20.784 [2024-07-13 00:32:07.863549] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.784 [2024-07-13 00:32:07.863556] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.784 [2024-07-13 00:32:07.863564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.784 [2024-07-13 00:32:07.863575] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:20.784 [2024-07-13 00:32:07.870650] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:20.784 [2024-07-13 00:32:07.870715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.870756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.870771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2295760 with addr=10.0.0.3, port=4420 00:23:20.784 [2024-07-13 00:32:07.870780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295760 is same with the state(5) to be set 00:23:20.784 [2024-07-13 00:32:07.870793] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295760 (9): Bad file descriptor 00:23:20.784 [2024-07-13 00:32:07.870805] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:20.784 [2024-07-13 00:32:07.870812] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:20.784 [2024-07-13 00:32:07.870820] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:20.784 [2024-07-13 00:32:07.870832] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.784 [2024-07-13 00:32:07.873434] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.784 [2024-07-13 00:32:07.873516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.873557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.873571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x247c2c0 with addr=10.0.0.2, port=4420 00:23:20.784 [2024-07-13 00:32:07.873580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247c2c0 is same with the state(5) to be set 00:23:20.784 [2024-07-13 00:32:07.873594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247c2c0 (9): Bad file descriptor 00:23:20.784 [2024-07-13 00:32:07.873606] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.784 [2024-07-13 00:32:07.873613] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.784 [2024-07-13 00:32:07.873620] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.784 [2024-07-13 00:32:07.873643] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:20.784 [2024-07-13 00:32:07.880720] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:20.784 [2024-07-13 00:32:07.880797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.880840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.784 [2024-07-13 00:32:07.880855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2295760 with addr=10.0.0.3, port=4420 00:23:20.784 [2024-07-13 00:32:07.880864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295760 is same with the state(5) to be set 00:23:20.784 [2024-07-13 00:32:07.880879] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295760 (9): Bad file descriptor 00:23:20.785 [2024-07-13 00:32:07.880892] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:20.785 [2024-07-13 00:32:07.880899] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:20.785 [2024-07-13 00:32:07.880907] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:20.785 [2024-07-13 00:32:07.880920] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.785 [2024-07-13 00:32:07.883488] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.785 [2024-07-13 00:32:07.883572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.785 [2024-07-13 00:32:07.883613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.785 [2024-07-13 00:32:07.883638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x247c2c0 with addr=10.0.0.2, port=4420 00:23:20.785 [2024-07-13 00:32:07.883648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247c2c0 is same with the state(5) to be set 00:23:20.785 [2024-07-13 00:32:07.883662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247c2c0 (9): Bad file descriptor 00:23:20.785 [2024-07-13 00:32:07.883674] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.785 [2024-07-13 00:32:07.883682] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.785 [2024-07-13 00:32:07.883689] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.785 [2024-07-13 00:32:07.883702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:20.785 [2024-07-13 00:32:07.890766] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:20.785 [2024-07-13 00:32:07.890849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.785 [2024-07-13 00:32:07.890890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.785 [2024-07-13 00:32:07.890904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2295760 with addr=10.0.0.3, port=4420 00:23:20.785 [2024-07-13 00:32:07.890913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295760 is same with the state(5) to be set 00:23:20.785 [2024-07-13 00:32:07.890926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295760 (9): Bad file descriptor 00:23:20.785 [2024-07-13 00:32:07.890939] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:20.785 [2024-07-13 00:32:07.890946] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:20.785 [2024-07-13 00:32:07.890953] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:20.785 [2024-07-13 00:32:07.890965] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.785 [2024-07-13 00:32:07.893543] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.785 [2024-07-13 00:32:07.893626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.785 [2024-07-13 00:32:07.893683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.785 [2024-07-13 00:32:07.893699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x247c2c0 with addr=10.0.0.2, port=4420 00:23:20.785 [2024-07-13 00:32:07.893707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247c2c0 is same with the state(5) to be set 00:23:20.785 [2024-07-13 00:32:07.893721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247c2c0 (9): Bad file descriptor 00:23:20.785 [2024-07-13 00:32:07.893734] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.785 [2024-07-13 00:32:07.893741] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.785 [2024-07-13 00:32:07.893749] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.785 [2024-07-13 00:32:07.893761] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:20.785 [2024-07-13 00:32:07.900822] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:20.785 [2024-07-13 00:32:07.900890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.785 [2024-07-13 00:32:07.900932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.785 [2024-07-13 00:32:07.900948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2295760 with addr=10.0.0.3, port=4420 00:23:20.785 [2024-07-13 00:32:07.900956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295760 is same with the state(5) to be set 00:23:20.785 [2024-07-13 00:32:07.900970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295760 (9): Bad file descriptor 00:23:20.785 [2024-07-13 00:32:07.900982] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:20.785 [2024-07-13 00:32:07.900990] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:20.785 [2024-07-13 00:32:07.900998] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:20.785 [2024-07-13 00:32:07.901010] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.785 [2024-07-13 00:32:07.903599] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.785 [2024-07-13 00:32:07.903686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.785 [2024-07-13 00:32:07.903727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.785 [2024-07-13 00:32:07.903742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x247c2c0 with addr=10.0.0.2, port=4420 00:23:20.785 [2024-07-13 00:32:07.903751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247c2c0 is same with the state(5) to be set 00:23:20.785 [2024-07-13 00:32:07.903764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247c2c0 (9): Bad file descriptor 00:23:20.785 [2024-07-13 00:32:07.903776] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.785 [2024-07-13 00:32:07.903783] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.785 [2024-07-13 00:32:07.903791] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.785 [2024-07-13 00:32:07.903803] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:20.785 [2024-07-13 00:32:07.910863] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:20.785 [2024-07-13 00:32:07.910947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.785 [2024-07-13 00:32:07.910988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.785 [2024-07-13 00:32:07.911003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2295760 with addr=10.0.0.3, port=4420 00:23:20.785 [2024-07-13 00:32:07.911012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295760 is same with the state(5) to be set 00:23:20.785 [2024-07-13 00:32:07.911026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295760 (9): Bad file descriptor 00:23:20.785 [2024-07-13 00:32:07.911038] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:20.785 [2024-07-13 00:32:07.911046] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:20.785 [2024-07-13 00:32:07.911054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:20.785 [2024-07-13 00:32:07.911066] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.785 [2024-07-13 00:32:07.913661] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.785 [2024-07-13 00:32:07.913726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.785 [2024-07-13 00:32:07.913767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.785 [2024-07-13 00:32:07.913782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x247c2c0 with addr=10.0.0.2, port=4420 00:23:20.785 [2024-07-13 00:32:07.913791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247c2c0 is same with the state(5) to be set 00:23:20.785 [2024-07-13 00:32:07.913805] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247c2c0 (9): Bad file descriptor 00:23:20.785 [2024-07-13 00:32:07.913817] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.785 [2024-07-13 00:32:07.913825] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.785 [2024-07-13 00:32:07.913832] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.785 [2024-07-13 00:32:07.913844] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
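Once the discovery pollers fetch the refreshed log page (next entries), the 4420 paths are reported as "not found" and removed, only the 4421 paths remain, and the reconnect storm ends. A minimal sketch of the path check the script performs afterwards, assuming the same scripts/rpc.py client pointed at the host application's /tmp/host.sock from the trace:

    # list the transport service IDs still attached to one of the discovered controllers
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # after the failover settles this is expected to print just: 4421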
00:23:20.785 [2024-07-13 00:32:07.920121] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:20.785 [2024-07-13 00:32:07.920165] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:20.785 [2024-07-13 00:32:07.920183] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:20.785 [2024-07-13 00:32:07.920212] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:20.785 [2024-07-13 00:32:07.920225] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:20.785 [2024-07-13 00:32:07.920237] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:20.785 [2024-07-13 00:32:08.006211] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:20.785 [2024-07-13 00:32:08.006276] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:21.719 00:32:08 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:21.719 00:32:08 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:21.719 00:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:21.719 00:32:08 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:21.719 00:32:08 -- host/mdns_discovery.sh@68 -- # xargs 00:23:21.719 00:32:08 -- host/mdns_discovery.sh@68 -- # sort 00:23:21.719 00:32:08 -- common/autotest_common.sh@10 -- # set +x 00:23:21.719 00:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:21.719 00:32:08 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:21.719 00:32:08 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:21.719 00:32:08 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.719 00:32:08 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:21.719 00:32:08 -- host/mdns_discovery.sh@64 -- # sort 00:23:21.719 00:32:08 -- host/mdns_discovery.sh@64 -- # xargs 00:23:21.719 00:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:21.719 00:32:08 -- common/autotest_common.sh@10 -- # set +x 00:23:21.719 00:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:21.719 00:32:08 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:21.719 00:32:08 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:21.719 00:32:08 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:21.719 00:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:21.719 00:32:08 -- common/autotest_common.sh@10 -- # set +x 00:23:21.719 00:32:08 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:21.719 00:32:08 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:21.719 00:32:08 -- host/mdns_discovery.sh@72 -- # xargs 00:23:21.719 00:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
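As at the earlier checkpoints, the script counts new events by asking the host for notifications newer than the last seen id (-i 4 here) and piping the result through jq. A minimal sketch under the same scripts/rpc.py and /tmp/host.sock assumptions:

    scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 4 | jq '. | length'

Right after the path switch this prints 0, since no bdevs were created or destroyed; later in the trace it prints 4, once bdev_nvme_stop_mdns_discovery tears down the four mdns*_nvme0n* bdevs and the notify id advances to 8.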
00:23:21.979 00:32:08 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:21.979 00:32:08 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:21.979 00:32:08 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:21.979 00:32:08 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:21.979 00:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:21.979 00:32:08 -- common/autotest_common.sh@10 -- # set +x 00:23:21.979 00:32:08 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:21.979 00:32:08 -- host/mdns_discovery.sh@72 -- # xargs 00:23:21.979 00:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:21.979 00:32:09 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:21.979 00:32:09 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:21.979 00:32:09 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:21.979 00:32:09 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:21.979 00:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:21.979 00:32:09 -- common/autotest_common.sh@10 -- # set +x 00:23:21.979 00:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:21.979 00:32:09 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:21.979 00:32:09 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:21.979 00:32:09 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:21.979 00:32:09 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:21.979 00:32:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:21.979 00:32:09 -- common/autotest_common.sh@10 -- # set +x 00:23:21.979 00:32:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:21.979 00:32:09 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:21.979 [2024-07-13 00:32:09.165264] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:22.910 00:32:10 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:22.910 00:32:10 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:22.910 00:32:10 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:22.910 00:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:22.910 00:32:10 -- host/mdns_discovery.sh@80 -- # sort 00:23:22.910 00:32:10 -- host/mdns_discovery.sh@80 -- # xargs 00:23:22.910 00:32:10 -- common/autotest_common.sh@10 -- # set +x 00:23:22.910 00:32:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:23.168 00:32:10 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:23.168 00:32:10 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:23.168 00:32:10 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:23.168 00:32:10 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:23.168 00:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:23.168 00:32:10 -- common/autotest_common.sh@10 -- # set +x 00:23:23.168 00:32:10 -- host/mdns_discovery.sh@68 -- # xargs 00:23:23.168 00:32:10 -- host/mdns_discovery.sh@68 -- # sort 00:23:23.168 00:32:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:23.168 00:32:10 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:23.168 00:32:10 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:23.168 00:32:10 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:23:23.168 00:32:10 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:23.168 00:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:23.168 00:32:10 -- host/mdns_discovery.sh@64 -- # xargs 00:23:23.168 00:32:10 -- common/autotest_common.sh@10 -- # set +x 00:23:23.168 00:32:10 -- host/mdns_discovery.sh@64 -- # sort 00:23:23.168 00:32:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:23.168 00:32:10 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:23.168 00:32:10 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:23.168 00:32:10 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:23.168 00:32:10 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:23.168 00:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:23.168 00:32:10 -- common/autotest_common.sh@10 -- # set +x 00:23:23.168 00:32:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:23.168 00:32:10 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:23.169 00:32:10 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:23.169 00:32:10 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:23.169 00:32:10 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:23.169 00:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:23.169 00:32:10 -- common/autotest_common.sh@10 -- # set +x 00:23:23.169 00:32:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:23.169 00:32:10 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:23.169 00:32:10 -- common/autotest_common.sh@640 -- # local es=0 00:23:23.169 00:32:10 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:23.169 00:32:10 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:23:23.169 00:32:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:23.169 00:32:10 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:23:23.169 00:32:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:23.169 00:32:10 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:23.169 00:32:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:23.169 00:32:10 -- common/autotest_common.sh@10 -- # set +x 00:23:23.169 [2024-07-13 00:32:10.333478] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:23.169 2024/07/13 00:32:10 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:23.169 request: 00:23:23.169 { 00:23:23.169 "method": "bdev_nvme_start_mdns_discovery", 00:23:23.169 "params": { 00:23:23.169 "name": "mdns", 00:23:23.169 "svcname": "_nvme-disc._http", 00:23:23.169 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:23.169 } 00:23:23.169 } 00:23:23.169 Got JSON-RPC error response 00:23:23.169 GoRPCClient: error on JSON-RPC call 00:23:23.169 00:32:10 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:23.169 00:32:10 -- 
common/autotest_common.sh@643 -- # es=1 00:23:23.169 00:32:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:23.169 00:32:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:23.169 00:32:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:23.169 00:32:10 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:23.810 [2024-07-13 00:32:10.722239] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:23.810 [2024-07-13 00:32:10.822242] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:23.810 [2024-07-13 00:32:10.922263] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:23.810 [2024-07-13 00:32:10.922301] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:23:23.810 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:23.810 cookie is 0 00:23:23.810 is_local: 1 00:23:23.810 our_own: 0 00:23:23.810 wide_area: 0 00:23:23.810 multicast: 1 00:23:23.810 cached: 1 00:23:23.810 [2024-07-13 00:32:11.022281] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:23.810 [2024-07-13 00:32:11.022333] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:23:23.810 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:23.810 cookie is 0 00:23:23.810 is_local: 1 00:23:23.810 our_own: 0 00:23:23.810 wide_area: 0 00:23:23.810 multicast: 1 00:23:23.810 cached: 1 00:23:24.745 [2024-07-13 00:32:11.935393] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:24.745 [2024-07-13 00:32:11.935424] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:24.745 [2024-07-13 00:32:11.935455] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:25.003 [2024-07-13 00:32:12.021476] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:25.003 [2024-07-13 00:32:12.035157] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:25.003 [2024-07-13 00:32:12.035175] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:25.003 [2024-07-13 00:32:12.035191] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:25.003 [2024-07-13 00:32:12.091513] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:25.004 [2024-07-13 00:32:12.091537] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:25.004 [2024-07-13 00:32:12.121609] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:25.004 [2024-07-13 00:32:12.180281] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:25.004 [2024-07-13 00:32:12.180304] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:28.287 00:32:15 -- 
host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:23:28.287 00:32:15 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:28.287 00:32:15 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:28.287 00:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.287 00:32:15 -- common/autotest_common.sh@10 -- # set +x 00:23:28.287 00:32:15 -- host/mdns_discovery.sh@80 -- # sort 00:23:28.287 00:32:15 -- host/mdns_discovery.sh@80 -- # xargs 00:23:28.287 00:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.287 00:32:15 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:28.287 00:32:15 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:28.287 00:32:15 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:28.287 00:32:15 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:28.287 00:32:15 -- host/mdns_discovery.sh@76 -- # sort 00:23:28.287 00:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.287 00:32:15 -- host/mdns_discovery.sh@76 -- # xargs 00:23:28.287 00:32:15 -- common/autotest_common.sh@10 -- # set +x 00:23:28.287 00:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.287 00:32:15 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:28.287 00:32:15 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:28.287 00:32:15 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.287 00:32:15 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:28.287 00:32:15 -- host/mdns_discovery.sh@64 -- # sort 00:23:28.287 00:32:15 -- host/mdns_discovery.sh@64 -- # xargs 00:23:28.287 00:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.287 00:32:15 -- common/autotest_common.sh@10 -- # set +x 00:23:28.287 00:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:28.546 00:32:15 -- common/autotest_common.sh@640 -- # local es=0 00:23:28.546 00:32:15 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:28.546 00:32:15 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:23:28.546 00:32:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:28.546 00:32:15 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:23:28.546 00:32:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:28.546 00:32:15 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:28.546 00:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.546 00:32:15 -- common/autotest_common.sh@10 -- # set +x 00:23:28.546 [2024-07-13 00:32:15.529323] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:28.546 2024/07/13 00:32:15 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: 
map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:28.546 request: 00:23:28.546 { 00:23:28.546 "method": "bdev_nvme_start_mdns_discovery", 00:23:28.546 "params": { 00:23:28.546 "name": "cdc", 00:23:28.546 "svcname": "_nvme-disc._tcp", 00:23:28.546 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:28.546 } 00:23:28.546 } 00:23:28.546 Got JSON-RPC error response 00:23:28.546 GoRPCClient: error on JSON-RPC call 00:23:28.546 00:32:15 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:28.546 00:32:15 -- common/autotest_common.sh@643 -- # es=1 00:23:28.546 00:32:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:28.546 00:32:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:28.546 00:32:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@76 -- # sort 00:23:28.546 00:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.546 00:32:15 -- common/autotest_common.sh@10 -- # set +x 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@76 -- # xargs 00:23:28.546 00:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:28.546 00:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.546 00:32:15 -- common/autotest_common.sh@10 -- # set +x 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@64 -- # sort 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@64 -- # xargs 00:23:28.546 00:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:28.546 00:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.546 00:32:15 -- common/autotest_common.sh@10 -- # set +x 00:23:28.546 00:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@197 -- # kill 97804 00:23:28.546 00:32:15 -- host/mdns_discovery.sh@200 -- # wait 97804 00:23:28.805 [2024-07-13 00:32:15.803477] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:28.805 00:32:15 -- host/mdns_discovery.sh@201 -- # kill 97890 00:23:28.805 Got SIGTERM, quitting. 00:23:28.805 00:32:15 -- host/mdns_discovery.sh@202 -- # kill 97833 00:23:28.805 00:32:15 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:28.805 00:32:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:28.805 00:32:15 -- nvmf/common.sh@116 -- # sync 00:23:28.805 Got SIGTERM, quitting. 
00:23:28.805 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:28.805 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:28.805 avahi-daemon 0.8 exiting. 00:23:28.805 00:32:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:28.805 00:32:15 -- nvmf/common.sh@119 -- # set +e 00:23:28.805 00:32:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:28.805 00:32:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:28.805 rmmod nvme_tcp 00:23:28.805 rmmod nvme_fabrics 00:23:28.805 rmmod nvme_keyring 00:23:28.805 00:32:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:29.064 00:32:16 -- nvmf/common.sh@123 -- # set -e 00:23:29.064 00:32:16 -- nvmf/common.sh@124 -- # return 0 00:23:29.064 00:32:16 -- nvmf/common.sh@477 -- # '[' -n 97754 ']' 00:23:29.064 00:32:16 -- nvmf/common.sh@478 -- # killprocess 97754 00:23:29.064 00:32:16 -- common/autotest_common.sh@926 -- # '[' -z 97754 ']' 00:23:29.064 00:32:16 -- common/autotest_common.sh@930 -- # kill -0 97754 00:23:29.064 00:32:16 -- common/autotest_common.sh@931 -- # uname 00:23:29.064 00:32:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:29.064 00:32:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97754 00:23:29.064 00:32:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:29.064 00:32:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:29.064 killing process with pid 97754 00:23:29.064 00:32:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97754' 00:23:29.064 00:32:16 -- common/autotest_common.sh@945 -- # kill 97754 00:23:29.064 00:32:16 -- common/autotest_common.sh@950 -- # wait 97754 00:23:29.324 00:32:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:29.324 00:32:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:29.324 00:32:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:29.324 00:32:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:29.324 00:32:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:29.324 00:32:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.324 00:32:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.324 00:32:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.324 00:32:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:29.324 00:23:29.324 real 0m20.822s 00:23:29.324 user 0m40.541s 00:23:29.324 sys 0m2.098s 00:23:29.324 00:32:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:29.324 00:32:16 -- common/autotest_common.sh@10 -- # set +x 00:23:29.324 ************************************ 00:23:29.324 END TEST nvmf_mdns_discovery 00:23:29.324 ************************************ 00:23:29.324 00:32:16 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:29.324 00:32:16 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:29.324 00:32:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:29.324 00:32:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:29.324 00:32:16 -- common/autotest_common.sh@10 -- # set +x 00:23:29.324 ************************************ 00:23:29.324 START TEST nvmf_multipath 00:23:29.324 ************************************ 00:23:29.324 00:32:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:29.324 * Looking for 
test storage... 00:23:29.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:29.324 00:32:16 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:29.324 00:32:16 -- nvmf/common.sh@7 -- # uname -s 00:23:29.324 00:32:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.324 00:32:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.324 00:32:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.324 00:32:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.324 00:32:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.324 00:32:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.324 00:32:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.324 00:32:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.324 00:32:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.324 00:32:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.324 00:32:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:23:29.324 00:32:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:23:29.324 00:32:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.324 00:32:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.324 00:32:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:29.324 00:32:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:29.324 00:32:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.324 00:32:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.324 00:32:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.324 00:32:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.324 00:32:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.324 00:32:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.324 00:32:16 -- 
paths/export.sh@5 -- # export PATH 00:23:29.324 00:32:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.324 00:32:16 -- nvmf/common.sh@46 -- # : 0 00:23:29.324 00:32:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:29.324 00:32:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:29.324 00:32:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:29.324 00:32:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.324 00:32:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.324 00:32:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:29.324 00:32:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:29.324 00:32:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:29.324 00:32:16 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:29.324 00:32:16 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:29.324 00:32:16 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:29.324 00:32:16 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:29.324 00:32:16 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:29.324 00:32:16 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:29.324 00:32:16 -- host/multipath.sh@30 -- # nvmftestinit 00:23:29.324 00:32:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:29.324 00:32:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.324 00:32:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:29.324 00:32:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:29.324 00:32:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:29.324 00:32:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.324 00:32:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.324 00:32:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.324 00:32:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:29.324 00:32:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:29.324 00:32:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:29.324 00:32:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:29.324 00:32:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:29.324 00:32:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:29.324 00:32:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.324 00:32:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.324 00:32:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:29.324 00:32:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:29.324 00:32:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:29.324 00:32:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:29.324 00:32:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:29.324 00:32:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.324 00:32:16 -- 
nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:29.324 00:32:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:29.324 00:32:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:29.324 00:32:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:29.324 00:32:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:29.324 00:32:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:29.584 Cannot find device "nvmf_tgt_br" 00:23:29.584 00:32:16 -- nvmf/common.sh@154 -- # true 00:23:29.584 00:32:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:29.584 Cannot find device "nvmf_tgt_br2" 00:23:29.584 00:32:16 -- nvmf/common.sh@155 -- # true 00:23:29.584 00:32:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:29.584 00:32:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:29.584 Cannot find device "nvmf_tgt_br" 00:23:29.584 00:32:16 -- nvmf/common.sh@157 -- # true 00:23:29.584 00:32:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:29.584 Cannot find device "nvmf_tgt_br2" 00:23:29.584 00:32:16 -- nvmf/common.sh@158 -- # true 00:23:29.584 00:32:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:29.584 00:32:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:29.584 00:32:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:29.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.584 00:32:16 -- nvmf/common.sh@161 -- # true 00:23:29.584 00:32:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:29.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.584 00:32:16 -- nvmf/common.sh@162 -- # true 00:23:29.584 00:32:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:29.584 00:32:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:29.584 00:32:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:29.584 00:32:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:29.584 00:32:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:29.584 00:32:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:29.584 00:32:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:29.584 00:32:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:29.584 00:32:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:29.584 00:32:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:29.584 00:32:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:29.584 00:32:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:29.584 00:32:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:29.584 00:32:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:29.584 00:32:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:29.584 00:32:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:29.584 00:32:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:29.584 00:32:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 
00:23:29.584 00:32:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:29.584 00:32:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:29.843 00:32:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:29.843 00:32:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:29.843 00:32:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:29.843 00:32:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:29.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:23:29.843 00:23:29.843 --- 10.0.0.2 ping statistics --- 00:23:29.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.843 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:23:29.843 00:32:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:29.843 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:29.843 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:23:29.843 00:23:29.843 --- 10.0.0.3 ping statistics --- 00:23:29.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.843 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:23:29.843 00:32:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:29.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:29.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:29.843 00:23:29.843 --- 10.0.0.1 ping statistics --- 00:23:29.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.843 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:29.843 00:32:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.843 00:32:16 -- nvmf/common.sh@421 -- # return 0 00:23:29.843 00:32:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:29.843 00:32:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.843 00:32:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:29.843 00:32:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:29.843 00:32:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.843 00:32:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:29.843 00:32:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:29.843 00:32:16 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:29.843 00:32:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:29.843 00:32:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:29.843 00:32:16 -- common/autotest_common.sh@10 -- # set +x 00:23:29.843 00:32:16 -- nvmf/common.sh@469 -- # nvmfpid=98393 00:23:29.843 00:32:16 -- nvmf/common.sh@470 -- # waitforlisten 98393 00:23:29.843 00:32:16 -- common/autotest_common.sh@819 -- # '[' -z 98393 ']' 00:23:29.843 00:32:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:29.843 00:32:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.843 00:32:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:29.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.843 00:32:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
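In plainer terms, the nvmf_veth_init sequence traced above builds a small veth-and-bridge topology with the target isolated in its own network namespace: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target owns nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) inside nvmf_tgt_ns_spdk, and the peer ends of all three veth pairs are enslaved to the nvmf_br bridge. A condensed sketch of those steps, paraphrasing the commands already logged (individual link-up commands elided):

    ip netns add nvmf_tgt_ns_spdk                              # namespace that will run nvmf_tgt
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br   # target pair #1 (10.0.0.2)
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2  # target pair #2 (10.0.0.3)
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                    # bridge the peer ends together
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                   # reachability check for both target addresses

The sub-millisecond RTTs in the ping statistics above are expected: everything is veth-to-veth on a single host.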
00:23:29.843 00:32:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:29.843 00:32:16 -- common/autotest_common.sh@10 -- # set +x 00:23:29.843 [2024-07-13 00:32:16.940191] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:29.843 [2024-07-13 00:32:16.940281] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.103 [2024-07-13 00:32:17.081923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:30.103 [2024-07-13 00:32:17.176962] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:30.103 [2024-07-13 00:32:17.177195] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.103 [2024-07-13 00:32:17.177212] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.103 [2024-07-13 00:32:17.177223] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.103 [2024-07-13 00:32:17.177402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.103 [2024-07-13 00:32:17.177897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.039 00:32:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:31.039 00:32:17 -- common/autotest_common.sh@852 -- # return 0 00:23:31.039 00:32:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:31.039 00:32:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:31.039 00:32:17 -- common/autotest_common.sh@10 -- # set +x 00:23:31.039 00:32:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.039 00:32:17 -- host/multipath.sh@33 -- # nvmfapp_pid=98393 00:23:31.039 00:32:17 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:31.039 [2024-07-13 00:32:18.236751] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.039 00:32:18 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:31.606 Malloc0 00:23:31.606 00:32:18 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:31.606 00:32:18 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:31.865 00:32:19 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:32.123 [2024-07-13 00:32:19.225963] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.123 00:32:19 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:32.381 [2024-07-13 00:32:19.430048] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:32.381 00:32:19 -- host/multipath.sh@44 -- # bdevperf_pid=98499 00:23:32.381 00:32:19 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 
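Stripped of the xtrace noise, the data path this multipath run exercises comes down to a handful of rpc.py calls: the ones above configure the target, and the host-side attach calls that follow give bdevperf a single Nvme0n1 bdev with two TCP paths to the same subsystem. A condensed sketch (script paths shortened; the full rpc.py path and exact option values are as logged):

    # target side, over the default /var/tmp/spdk.sock served by nvmf_tgt in the namespace
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # host side, over bdevperf's /var/tmp/bdevperf.sock: one controller name, two paths
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

From there the test only flips per-listener ANA state with nvmf_subsystem_listener_set_ana_state (optimized, non_optimized or inaccessible) and lets confirm_io_on_port check which port actually carries I/O: bpftrace.sh attaches scripts/bpf/nvmf_path.bt to the target, and the "@path[10.0.0.2, <port>]: <count>" lines it writes to trace.txt are filtered with awk, cut and sed to recover the active port.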
00:23:32.381 00:32:19 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:32.381 00:32:19 -- host/multipath.sh@47 -- # waitforlisten 98499 /var/tmp/bdevperf.sock 00:23:32.381 00:32:19 -- common/autotest_common.sh@819 -- # '[' -z 98499 ']' 00:23:32.381 00:32:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.381 00:32:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:32.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.381 00:32:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.381 00:32:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:32.381 00:32:19 -- common/autotest_common.sh@10 -- # set +x 00:23:33.317 00:32:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:33.317 00:32:20 -- common/autotest_common.sh@852 -- # return 0 00:23:33.317 00:32:20 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:33.575 00:32:20 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:33.833 Nvme0n1 00:23:34.092 00:32:21 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:34.350 Nvme0n1 00:23:34.350 00:32:21 -- host/multipath.sh@78 -- # sleep 1 00:23:34.350 00:32:21 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:35.284 00:32:22 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:35.284 00:32:22 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:35.542 00:32:22 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:35.799 00:32:22 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:35.799 00:32:22 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98393 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:35.799 00:32:22 -- host/multipath.sh@65 -- # dtrace_pid=98586 00:23:35.799 00:32:22 -- host/multipath.sh@66 -- # sleep 6 00:23:42.354 00:32:28 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:42.354 00:32:28 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:42.355 00:32:29 -- host/multipath.sh@67 -- # active_port=4421 00:23:42.355 00:32:29 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:42.355 Attaching 4 probes... 
00:23:42.355 @path[10.0.0.2, 4421]: 20566 00:23:42.355 @path[10.0.0.2, 4421]: 20888 00:23:42.355 @path[10.0.0.2, 4421]: 20528 00:23:42.355 @path[10.0.0.2, 4421]: 20802 00:23:42.355 @path[10.0.0.2, 4421]: 20906 00:23:42.355 00:32:29 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:42.355 00:32:29 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:42.355 00:32:29 -- host/multipath.sh@69 -- # sed -n 1p 00:23:42.355 00:32:29 -- host/multipath.sh@69 -- # port=4421 00:23:42.355 00:32:29 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:42.355 00:32:29 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:42.355 00:32:29 -- host/multipath.sh@72 -- # kill 98586 00:23:42.355 00:32:29 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:42.355 00:32:29 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:42.355 00:32:29 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:42.355 00:32:29 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:42.612 00:32:29 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:42.612 00:32:29 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98393 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:42.612 00:32:29 -- host/multipath.sh@65 -- # dtrace_pid=98717 00:23:42.612 00:32:29 -- host/multipath.sh@66 -- # sleep 6 00:23:49.168 00:32:35 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:49.168 00:32:35 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:49.168 00:32:35 -- host/multipath.sh@67 -- # active_port=4420 00:23:49.168 00:32:35 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:49.168 Attaching 4 probes... 
00:23:49.168 @path[10.0.0.2, 4420]: 20618 00:23:49.168 @path[10.0.0.2, 4420]: 20989 00:23:49.168 @path[10.0.0.2, 4420]: 21007 00:23:49.168 @path[10.0.0.2, 4420]: 20915 00:23:49.168 @path[10.0.0.2, 4420]: 21012 00:23:49.168 00:32:35 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:49.168 00:32:35 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:49.168 00:32:35 -- host/multipath.sh@69 -- # sed -n 1p 00:23:49.168 00:32:35 -- host/multipath.sh@69 -- # port=4420 00:23:49.168 00:32:35 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:49.168 00:32:35 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:49.168 00:32:35 -- host/multipath.sh@72 -- # kill 98717 00:23:49.168 00:32:35 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:49.168 00:32:35 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:49.168 00:32:35 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:49.168 00:32:36 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:49.426 00:32:36 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:49.426 00:32:36 -- host/multipath.sh@65 -- # dtrace_pid=98853 00:23:49.426 00:32:36 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98393 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:49.426 00:32:36 -- host/multipath.sh@66 -- # sleep 6 00:23:55.984 00:32:42 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:55.984 00:32:42 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:55.984 00:32:42 -- host/multipath.sh@67 -- # active_port=4421 00:23:55.984 00:32:42 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:55.984 Attaching 4 probes... 
00:23:55.984 @path[10.0.0.2, 4421]: 16967 00:23:55.984 @path[10.0.0.2, 4421]: 20718 00:23:55.984 @path[10.0.0.2, 4421]: 20722 00:23:55.984 @path[10.0.0.2, 4421]: 20588 00:23:55.984 @path[10.0.0.2, 4421]: 20781 00:23:55.984 00:32:42 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:55.984 00:32:42 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:55.984 00:32:42 -- host/multipath.sh@69 -- # sed -n 1p 00:23:55.984 00:32:42 -- host/multipath.sh@69 -- # port=4421 00:23:55.984 00:32:42 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:55.984 00:32:42 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:55.984 00:32:42 -- host/multipath.sh@72 -- # kill 98853 00:23:55.984 00:32:42 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:55.985 00:32:42 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:55.985 00:32:42 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:55.985 00:32:42 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:55.985 00:32:43 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:55.985 00:32:43 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98393 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:55.985 00:32:43 -- host/multipath.sh@65 -- # dtrace_pid=98982 00:23:55.985 00:32:43 -- host/multipath.sh@66 -- # sleep 6 00:24:02.539 00:32:49 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:02.539 00:32:49 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:02.539 00:32:49 -- host/multipath.sh@67 -- # active_port= 00:24:02.539 00:32:49 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:02.539 Attaching 4 probes... 
00:24:02.539 00:24:02.539 00:24:02.539 00:24:02.539 00:24:02.539 00:24:02.539 00:32:49 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:02.539 00:32:49 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:02.539 00:32:49 -- host/multipath.sh@69 -- # sed -n 1p 00:24:02.539 00:32:49 -- host/multipath.sh@69 -- # port= 00:24:02.539 00:32:49 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:02.539 00:32:49 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:02.539 00:32:49 -- host/multipath.sh@72 -- # kill 98982 00:24:02.539 00:32:49 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:02.539 00:32:49 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:02.539 00:32:49 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:02.539 00:32:49 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:02.797 00:32:49 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:02.797 00:32:49 -- host/multipath.sh@65 -- # dtrace_pid=99114 00:24:02.797 00:32:49 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98393 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:02.797 00:32:49 -- host/multipath.sh@66 -- # sleep 6 00:24:09.370 00:32:55 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:09.370 00:32:55 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:09.370 00:32:56 -- host/multipath.sh@67 -- # active_port=4421 00:24:09.370 00:32:56 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:09.370 Attaching 4 probes... 
00:24:09.370 @path[10.0.0.2, 4421]: 20102 00:24:09.370 @path[10.0.0.2, 4421]: 21273 00:24:09.370 @path[10.0.0.2, 4421]: 21357 00:24:09.370 @path[10.0.0.2, 4421]: 21436 00:24:09.370 @path[10.0.0.2, 4421]: 21502 00:24:09.370 00:32:56 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:09.370 00:32:56 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:09.370 00:32:56 -- host/multipath.sh@69 -- # sed -n 1p 00:24:09.370 00:32:56 -- host/multipath.sh@69 -- # port=4421 00:24:09.370 00:32:56 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:09.370 00:32:56 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:09.370 00:32:56 -- host/multipath.sh@72 -- # kill 99114 00:24:09.370 00:32:56 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:09.370 00:32:56 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:09.370 [2024-07-13 00:32:56.381165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bad90 is same with the state(5) to be set 00:24:09.370 [2024-07-13 00:32:56.381226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bad90 is same with the state(5) to be set 00:24:09.370 [2024-07-13 00:32:56.381237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bad90 is same with the state(5) to be set 00:24:09.370 [2024-07-13 00:32:56.381250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bad90 is same with the state(5) to be set 00:24:09.370 [2024-07-13 00:32:56.381257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bad90 is same with the state(5) to be set 00:24:09.370 [2024-07-13 00:32:56.381265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bad90 is same with the state(5) to be set 00:24:09.370 [2024-07-13 00:32:56.381273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bad90 is same with the state(5) to be set 00:24:09.370 [2024-07-13 00:32:56.381281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bad90 is same with the state(5) to be set 00:24:09.370 [2024-07-13 00:32:56.381288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bad90 is same with the state(5) to be set 00:24:09.370 [2024-07-13 00:32:56.381296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bad90 is same with the state(5) to be set 00:24:09.370 [2024-07-13 00:32:56.381303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bad90 is same with the state(5) to be set 00:24:09.370 [2024-07-13 00:32:56.381311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bad90 is same with the state(5) to be set 00:24:09.370 [2024-07-13 00:32:56.381319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bad90 is same with the state(5) to be set 00:24:09.370 [2024-07-13 00:32:56.381328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bad90 is same with the state(5) to be set 00:24:09.370 [2024-07-13 00:32:56.381336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bad90 is same with the state(5) to be set 00:24:09.370 [2024-07-13 00:32:56.381343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8bad90 is same with the state(5) to be set 00:24:09.370 [the same tcp.c:1574:nvmf_tcp_qpair_set_recv_state *ERROR* line repeats many more times here with successive 2024-07-13 00:32:56.38xxxx timestamps; the duplicates are elided] 00:24:09.371 [2024-07-13 00:32:56.381891] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bad90 is
same with the state(5) to be set 00:24:09.371 [2024-07-13 00:32:56.381898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bad90 is same with the state(5) to be set 00:24:09.371 00:32:56 -- host/multipath.sh@101 -- # sleep 1 00:24:10.314 00:32:57 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:24:10.314 00:32:57 -- host/multipath.sh@65 -- # dtrace_pid=99244 00:24:10.314 00:32:57 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98393 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:10.314 00:32:57 -- host/multipath.sh@66 -- # sleep 6 00:24:16.880 00:33:03 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:16.880 00:33:03 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:16.880 00:33:03 -- host/multipath.sh@67 -- # active_port=4420 00:24:16.880 00:33:03 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:16.880 Attaching 4 probes... 00:24:16.880 @path[10.0.0.2, 4420]: 20746 00:24:16.880 @path[10.0.0.2, 4420]: 21191 00:24:16.880 @path[10.0.0.2, 4420]: 20334 00:24:16.880 @path[10.0.0.2, 4420]: 20569 00:24:16.880 @path[10.0.0.2, 4420]: 20016 00:24:16.880 00:33:03 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:16.880 00:33:03 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:16.880 00:33:03 -- host/multipath.sh@69 -- # sed -n 1p 00:24:16.880 00:33:03 -- host/multipath.sh@69 -- # port=4420 00:24:16.880 00:33:03 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:16.880 00:33:03 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:16.880 00:33:03 -- host/multipath.sh@72 -- # kill 99244 00:24:16.880 00:33:03 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:16.880 00:33:03 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:16.880 [2024-07-13 00:33:03.902282] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:16.880 00:33:03 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:17.137 00:33:04 -- host/multipath.sh@111 -- # sleep 6 00:24:23.696 00:33:10 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:23.696 00:33:10 -- host/multipath.sh@65 -- # dtrace_pid=99441 00:24:23.696 00:33:10 -- host/multipath.sh@66 -- # sleep 6 00:24:23.696 00:33:10 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98393 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:30.265 00:33:16 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:30.265 00:33:16 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:30.265 00:33:16 -- host/multipath.sh@67 -- # active_port=4421 00:24:30.265 00:33:16 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:30.265 Attaching 4 probes... 
00:24:30.265 @path[10.0.0.2, 4421]: 20409 00:24:30.265 @path[10.0.0.2, 4421]: 20827 00:24:30.265 @path[10.0.0.2, 4421]: 20859 00:24:30.265 @path[10.0.0.2, 4421]: 20752 00:24:30.265 @path[10.0.0.2, 4421]: 20810 00:24:30.265 00:33:16 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:30.265 00:33:16 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:30.265 00:33:16 -- host/multipath.sh@69 -- # sed -n 1p 00:24:30.265 00:33:16 -- host/multipath.sh@69 -- # port=4421 00:24:30.265 00:33:16 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:30.265 00:33:16 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:30.265 00:33:16 -- host/multipath.sh@72 -- # kill 99441 00:24:30.265 00:33:16 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:30.265 00:33:16 -- host/multipath.sh@114 -- # killprocess 98499 00:24:30.265 00:33:16 -- common/autotest_common.sh@926 -- # '[' -z 98499 ']' 00:24:30.265 00:33:16 -- common/autotest_common.sh@930 -- # kill -0 98499 00:24:30.265 00:33:16 -- common/autotest_common.sh@931 -- # uname 00:24:30.265 00:33:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:30.265 00:33:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 98499 00:24:30.265 killing process with pid 98499 00:24:30.265 00:33:16 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:30.265 00:33:16 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:30.265 00:33:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 98499' 00:24:30.265 00:33:16 -- common/autotest_common.sh@945 -- # kill 98499 00:24:30.266 00:33:16 -- common/autotest_common.sh@950 -- # wait 98499 00:24:30.266 Connection closed with partial response: 00:24:30.266 00:24:30.266 00:24:30.266 00:33:16 -- host/multipath.sh@116 -- # wait 98499 00:24:30.266 00:33:16 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:30.266 [2024-07-13 00:32:19.505039] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:30.266 [2024-07-13 00:32:19.505173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98499 ] 00:24:30.266 [2024-07-13 00:32:19.648184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.266 [2024-07-13 00:32:19.745485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:30.266 Running I/O for 90 seconds... 
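For reference, the ANA path switch and the I/O check exercised in the multipath.sh output above can be condensed into the following shell sketch. It reuses only commands visible in this transcript (rpc.py, jq, bpftrace.sh, and the trace.txt post-processing); $bdevperf_pid stands for the bdevperf PID passed to bpftrace.sh in this run (98393), paths are relative to the spdk repo, and the sketch is illustrative rather than the literal multipath.sh source.

  # 1) add a second listener on port 4421 and mark it optimized (as done above)
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized

  # 2) read back which listener the target now reports as optimized
  active_port=$(scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
    | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')

  # 3) attach the nvmf_path.bt probes to bdevperf for a few seconds, then take the port from the
  #    first "@path[10.0.0.2, ...]" counter in the trace it wrote (test/nvmf/host/trace.txt here)
  scripts/bpftrace.sh "$bdevperf_pid" scripts/bpf/nvmf_path.bt &
  dtrace_pid=$!
  sleep 6
  port=$(cat test/nvmf/host/trace.txt | cut -d ']' -f1 | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)
  kill "$dtrace_pid"; rm -f test/nvmf/host/trace.txt

  # 4) the check passes when the I/O actually flows over the port reported as optimized
  [[ "$port" == "$active_port" ]]

The per-command dump that follows is the bdevperf trace captured in try.txt while the paths were switched.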
00:24:30.266 [2024-07-13 00:32:29.600350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.266 [2024-07-13 00:32:29.600435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.600525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:121640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.600546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.600567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:121648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.600600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.600621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.600652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.600712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.600731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.600753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.600769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.600790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.600806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.600827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.266 [2024-07-13 00:32:29.600843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.600865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.600881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.600902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:121704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.266 [2024-07-13 00:32:29.600918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.600939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.600982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.601009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.601025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.601075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.266 [2024-07-13 00:32:29.601091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.601111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.266 [2024-07-13 00:32:29.601126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.601145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.266 [2024-07-13 00:32:29.601160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.601180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.601195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.601216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.266 [2024-07-13 00:32:29.601231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.601252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:121768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.601267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.601288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.601302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.601322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.266 [2024-07-13 00:32:29.601337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.601356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.601371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.601390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.601405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.601425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.601439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.601939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.266 [2024-07-13 00:32:29.601967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.601993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:121824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.266 [2024-07-13 00:32:29.602011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.602061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:121832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.602077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.602098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.602112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.602133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.602148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.602168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.602182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.602202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121096 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.602217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.602238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.602253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.602273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.602287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.602306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.602321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.602342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.602357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.602377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.602391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.602423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.602439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.602459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.602473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:30.266 [2024-07-13 00:32:29.602493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.266 [2024-07-13 00:32:29.602507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.602528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.602543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.602562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:44 nsid:1 lba:121272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.602577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.602597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:121280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.602611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.602631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.602662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.602699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.267 [2024-07-13 00:32:29.602716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.602736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.602751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.602772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:121856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.602787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.602807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:121864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.267 [2024-07-13 00:32:29.602823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.602845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.602861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.602882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.602905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.602927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.602942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 
00:32:29.602965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.267 [2024-07-13 00:32:29.602981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:121912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.267 [2024-07-13 00:32:29.603065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.267 [2024-07-13 00:32:29.603100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:121928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.267 [2024-07-13 00:32:29.603204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:121952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.267 [2024-07-13 00:32:29.603308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:121984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.267 [2024-07-13 00:32:29.603421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:122000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.267 [2024-07-13 00:32:29.603456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:121312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.603982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.603997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.604017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:121624 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:30.267 [2024-07-13 00:32:29.604032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:30.267 [2024-07-13 00:32:29.604052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:122008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.604067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:122016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.604102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.268 [2024-07-13 00:32:29.604136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.268 [2024-07-13 00:32:29.604172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:122040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.604207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.268 [2024-07-13 00:32:29.604250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.268 [2024-07-13 00:32:29.604285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:122064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.604320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.268 [2024-07-13 00:32:29.604354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:122080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.604389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:122088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.604424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.268 [2024-07-13 00:32:29.604458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.268 [2024-07-13 00:32:29.604493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.268 [2024-07-13 00:32:29.604527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.604562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.604597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:122136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.604690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.604745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.268 [2024-07-13 00:32:29.604782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 
00:32:29.604803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:122160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.604819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.268 [2024-07-13 00:32:29.604857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:122176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.604893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:122184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.604930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.268 [2024-07-13 00:32:29.604966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.604987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:122200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.605003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.605024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:122208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.605069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.605089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.268 [2024-07-13 00:32:29.605103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.605123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.605138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.605157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:122232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.605172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.605199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.268 [2024-07-13 00:32:29.605214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.605234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.605249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.605270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:122256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.605286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.605306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:122264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.605321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.606287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:122272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.606314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.606339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.268 [2024-07-13 00:32:29.606354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.606374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.268 [2024-07-13 00:32:29.606388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.606408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.606423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.606442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:122304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.606456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.606475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:122312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.606490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.606509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:122320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.606524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.606543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.268 [2024-07-13 00:32:29.606557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.606576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:122336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.268 [2024-07-13 00:32:29.606601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:30.268 [2024-07-13 00:32:29.606623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.269 [2024-07-13 00:32:29.606637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:29.606669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:29.606686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:29.606724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:29.606739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:29.606759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:29.606773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:29.606792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.269 [2024-07-13 00:32:29.606807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:29.606829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:122384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.269 [2024-07-13 00:32:29.606844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:29.606864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:30.269 [2024-07-13 00:32:29.606879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.130562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.269 [2024-07-13 00:32:36.130671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.130733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.269 [2024-07-13 00:32:36.130755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.130779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.269 [2024-07-13 00:32:36.130796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.130817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.269 [2024-07-13 00:32:36.130833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.130855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.130896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.130920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.130937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.130958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.130973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.130994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.131040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.131087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.131120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.131152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.131184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.269 [2024-07-13 00:32:36.131216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.131249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.131281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.269 [2024-07-13 00:32:36.131313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.131345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.131405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.131440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.131473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.131508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.269 [2024-07-13 00:32:36.131541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.131575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.131609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.269 [2024-07-13 00:32:36.131661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.131683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.269 [2024-07-13 00:32:36.131714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.132081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.269 [2024-07-13 00:32:36.132120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.132144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.269 [2024-07-13 00:32:36.132160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.132182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.269 [2024-07-13 00:32:36.132197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 
00:24:30.269 [2024-07-13 00:32:36.132228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.269 [2024-07-13 00:32:36.132244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:30.269 [2024-07-13 00:32:36.132264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.269 [2024-07-13 00:32:36.132279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.132300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.270 [2024-07-13 00:32:36.132315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.132335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.270 [2024-07-13 00:32:36.132351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.132371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.270 [2024-07-13 00:32:36.132386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.132407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.270 [2024-07-13 00:32:36.132421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.132442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.270 [2024-07-13 00:32:36.132456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.132476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.270 [2024-07-13 00:32:36.132491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.132512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.132526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.132546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.270 [2024-07-13 00:32:36.132560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.132581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.270 [2024-07-13 00:32:36.132595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.132616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.132648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.132713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.270 [2024-07-13 00:32:36.132742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.132768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.132785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.132808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.132823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.132848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.132865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.132889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.132905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.132937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.132952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.132975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.132991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.133044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.133096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.133147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.133183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.133219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.133274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.133310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.133345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.133379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.133413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:30.270 [2024-07-13 00:32:36.133447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.133482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.133517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.270 [2024-07-13 00:32:36.133552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.270 [2024-07-13 00:32:36.133587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.133621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.270 [2024-07-13 00:32:36.133677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.133731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.270 [2024-07-13 00:32:36.133782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:30.270 [2024-07-13 00:32:36.133805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.133821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.133844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 
nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.133860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.133883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.133899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.133922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.133938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.133960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.133976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.134043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.134093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.134127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.134162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.134197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.134231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.134273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.134308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.134342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.134377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.134411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.134446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.134481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.134515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.134550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.134585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:24:30.271 [2024-07-13 00:32:36.134605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.134619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.134702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.134751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.134792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.134833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.134857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.134874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.135111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.135150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.135206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.135223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.135251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.135267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.135295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.135311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.135339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.135354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.135392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.135408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.135435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.135449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.135477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.135491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.135518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.135542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.135571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.135586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.135613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.135644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.135673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.135688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.135742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.271 [2024-07-13 00:32:36.135761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.135790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.135806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.135835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.271 [2024-07-13 00:32:36.135852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:30.271 [2024-07-13 00:32:36.135881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:36.135896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.135925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.272 [2024-07-13 00:32:36.135940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.135969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:36.135985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.136028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:36.136044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.136071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:36.136086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.136114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:36.136151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.136180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:36.136195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.136221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.272 [2024-07-13 00:32:36.136236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.136263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:30.272 [2024-07-13 00:32:36.136277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.136304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.272 [2024-07-13 00:32:36.136318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.136345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:36.136360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.136387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:36.136402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.136429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.272 [2024-07-13 00:32:36.136444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.136470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:36.136486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.136513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:36.136528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.136554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.272 [2024-07-13 00:32:36.136569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.136596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.272 [2024-07-13 00:32:36.136610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.136653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:36.136692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:36.136731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:54 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:36.136749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.097498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.272 [2024-07-13 00:32:43.097567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.097685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:43.097708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.097733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.272 [2024-07-13 00:32:43.097750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.097771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.272 [2024-07-13 00:32:43.097787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.097808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:43.097824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.097845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:43.097861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.097882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:43.097897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.097918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:43.097934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.097955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:43.097986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.098022] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:43.098051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.098071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:43.098085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.098124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:43.098140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.098159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:43.098173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.098193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:43.098207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.098226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:43.098240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.098259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.272 [2024-07-13 00:32:43.098273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.098292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.272 [2024-07-13 00:32:43.098307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.098327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.272 [2024-07-13 00:32:43.098341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.098361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:43.098375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:30.272 [2024-07-13 00:32:43.098395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:43.098409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.098428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:43.098443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.098462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.272 [2024-07-13 00:32:43.098476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.098495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.272 [2024-07-13 00:32:43.098509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:30.272 [2024-07-13 00:32:43.098528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.273 [2024-07-13 00:32:43.098550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.098571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.098586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.098606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.098637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.099538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.273 [2024-07-13 00:32:43.099565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.099594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.099610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.099669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.273 [2024-07-13 00:32:43.099712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.099741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.099758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.099783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.273 [2024-07-13 00:32:43.099799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.099824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.273 [2024-07-13 00:32:43.099840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.099865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.099883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.099909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.099925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.099951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.099967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.099992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.100047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.100103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.100141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.100179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.100216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.100253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.100290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.100328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.100365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.273 [2024-07-13 00:32:43.100402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.273 [2024-07-13 00:32:43.100441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.273 [2024-07-13 00:32:43.100479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.273 [2024-07-13 00:32:43.100518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:30.273 [2024-07-13 00:32:43.100566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.100604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.100712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.273 [2024-07-13 00:32:43.100768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.273 [2024-07-13 00:32:43.100809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.100850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.100890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.100931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.100972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.100997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.101028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.101067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 
lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.101082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.101105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.101120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.101152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.101169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.101192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.101207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.101231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.101246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.101270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.273 [2024-07-13 00:32:43.101286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.101310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.273 [2024-07-13 00:32:43.101325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:30.273 [2024-07-13 00:32:43.101348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.274 [2024-07-13 00:32:43.101363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.101387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.274 [2024-07-13 00:32:43.101402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.101425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.274 [2024-07-13 00:32:43.101455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.101479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.101494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.101517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.274 [2024-07-13 00:32:43.101531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.101554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.101569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.101781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.274 [2024-07-13 00:32:43.101807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.101839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.101869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.101900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.101916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.101944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.101960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.274 [2024-07-13 00:32:43.102033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.274 [2024-07-13 00:32:43.102075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.102117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 
dnr:0 00:24:30.274 [2024-07-13 00:32:43.102144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.102159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.274 [2024-07-13 00:32:43.102201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.102243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.274 [2024-07-13 00:32:43.102284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.274 [2024-07-13 00:32:43.102326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.274 [2024-07-13 00:32:43.102368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.102417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.102460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.102502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.102544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.274 [2024-07-13 00:32:43.102586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.102644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.274 [2024-07-13 00:32:43.102707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.102752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.274 [2024-07-13 00:32:43.102797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.102841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.102886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.102930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.102958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.102989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.103026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.103056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.103083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.103098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.103125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.103141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:43.103168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.274 [2024-07-13 00:32:43.103183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:30.274 [2024-07-13 00:32:56.382222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:117080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:117088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:117128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382434] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:117136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:117208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:117248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:117816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.382979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.382991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.383004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.383016] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.383030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.383041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.383055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.383067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.383080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.383093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.383107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.383119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.383132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.383145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.383159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.383171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.383186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:117904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.383205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.383219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.383231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.383245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.383257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.383271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.383284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.383297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.383309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.383323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.383335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.383348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.383360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.383374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.275 [2024-07-13 00:32:56.383387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.383400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.275 [2024-07-13 00:32:56.383412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.275 [2024-07-13 00:32:56.383425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.275 [2024-07-13 00:32:56.383438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.383463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.383488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.383514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.383547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.383574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.383600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.383625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.383663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.383690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.276 [2024-07-13 00:32:56.383717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:118024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.383742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.276 [2024-07-13 00:32:56.383768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.276 [2024-07-13 00:32:56.383794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.276 [2024-07-13 00:32:56.383820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 
[2024-07-13 00:32:56.383833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.276 [2024-07-13 00:32:56.383845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:118064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.383878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.276 [2024-07-13 00:32:56.383905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.276 [2024-07-13 00:32:56.383941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:118088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.383966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.383979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.383991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384109] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.276 [2024-07-13 00:32:56.384429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.276 [2024-07-13 00:32:56.384456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:118120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.276 [2024-07-13 00:32:56.384533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.276 [2024-07-13 00:32:56.384570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:118144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.276 [2024-07-13 00:32:56.384610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.276 [2024-07-13 00:32:56.384637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.384650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.384663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.384703] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.277 [2024-07-13 00:32:56.384718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.384734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.277 [2024-07-13 00:32:56.384748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.384769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:118184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.384783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.384798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:118192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.384813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.384828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.277 [2024-07-13 00:32:56.384842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.384863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.277 [2024-07-13 00:32:56.384878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.384894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.277 [2024-07-13 00:32:56.384908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.384923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.277 [2024-07-13 00:32:56.384936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.384952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:118232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.384974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.384998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:118240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.277 [2024-07-13 00:32:56.385305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.277 [2024-07-13 00:32:56.385330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:118264 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:30.277 [2024-07-13 00:32:56.385355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:118272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:118280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.277 [2024-07-13 00:32:56.385500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.277 [2024-07-13 00:32:56.385525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:118320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:118328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.277 [2024-07-13 00:32:56.385602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.277 
[2024-07-13 00:32:56.385627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:118360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:117888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.277 [2024-07-13 00:32:56.385887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.277 [2024-07-13 00:32:56.385900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f1790 is same with the state(5) to be set 00:24:30.277 [2024-07-13 00:32:56.385921] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.277 [2024-07-13 00:32:56.385931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.277 [2024-07-13 00:32:56.385941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117944 len:8 PRP1 0x0 PRP2 0x0 00:24:30.277 [2024-07-13 00:32:56.385953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.278 [2024-07-13 00:32:56.386018] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17f1790 was disconnected and freed. reset controller. 00:24:30.278 [2024-07-13 00:32:56.387155] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.278 [2024-07-13 00:32:56.387235] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19be980 (9): Bad file descriptor 00:24:30.278 [2024-07-13 00:32:56.387363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.278 [2024-07-13 00:32:56.387417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:30.278 [2024-07-13 00:32:56.387437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19be980 with addr=10.0.0.2, port=4421 00:24:30.278 [2024-07-13 00:32:56.387451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19be980 is same with the state(5) to be set 00:24:30.278 [2024-07-13 00:32:56.387474] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19be980 (9): Bad file descriptor 00:24:30.278 [2024-07-13 00:32:56.387496] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:30.278 [2024-07-13 00:32:56.387510] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:30.278 [2024-07-13 00:32:56.387523] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.278 [2024-07-13 00:32:56.387558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:30.278 [2024-07-13 00:32:56.387574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.278 [2024-07-13 00:33:06.440235] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
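The qpair notices above trace the multipath failover itself: I/O on qid:1 first completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) while the preferred path is withdrawn, the queued commands are then aborted with SQ DELETION (00/08) when that queue pair is torn down, the first reset attempt's reconnect to 10.0.0.2 port 4421 fails with errno 111 (ECONNREFUSED) and the controller briefly lands in a failed state, and a retried reset roughly ten seconds later succeeds. When triaging a run like this it is usually enough to tally the completion statuses and reconnect failures; a minimal sketch, assuming the console output has been saved to a file (multipath.log is a placeholder name, not produced by the test itself):

  # Tally the two completion statuses printed by spdk_nvme_print_completion above.
  grep -oE 'ASYMMETRIC ACCESS INACCESSIBLE \(03/02\)|ABORTED - SQ DELETION \(00/08\)' multipath.log | sort | uniq -c
  # Count how many reconnect attempts hit ECONNREFUSED before the reset succeeded.
  grep -c 'connect() failed, errno = 111' multipath.log

Both patterns are copied verbatim from the notices above, so the same commands work on any capture of this log.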
00:24:30.278 Received shutdown signal, test time was about 55.017099 seconds 00:24:30.278 00:24:30.278 Latency(us) 00:24:30.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.278 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:30.278 Verification LBA range: start 0x0 length 0x4000 00:24:30.278 Nvme0n1 : 55.02 11896.82 46.47 0.00 0.00 10742.63 927.19 7015926.69 00:24:30.278 =================================================================================================================== 00:24:30.278 Total : 11896.82 46.47 0.00 0.00 10742.63 927.19 7015926.69 00:24:30.278 00:33:16 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:30.278 00:33:17 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:24:30.278 00:33:17 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:30.278 00:33:17 -- host/multipath.sh@125 -- # nvmftestfini 00:24:30.278 00:33:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:30.278 00:33:17 -- nvmf/common.sh@116 -- # sync 00:24:30.278 00:33:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:30.278 00:33:17 -- nvmf/common.sh@119 -- # set +e 00:24:30.278 00:33:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:30.278 00:33:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:30.278 rmmod nvme_tcp 00:24:30.278 rmmod nvme_fabrics 00:24:30.278 rmmod nvme_keyring 00:24:30.278 00:33:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:30.278 00:33:17 -- nvmf/common.sh@123 -- # set -e 00:24:30.278 00:33:17 -- nvmf/common.sh@124 -- # return 0 00:24:30.278 00:33:17 -- nvmf/common.sh@477 -- # '[' -n 98393 ']' 00:24:30.278 00:33:17 -- nvmf/common.sh@478 -- # killprocess 98393 00:24:30.278 00:33:17 -- common/autotest_common.sh@926 -- # '[' -z 98393 ']' 00:24:30.278 00:33:17 -- common/autotest_common.sh@930 -- # kill -0 98393 00:24:30.278 00:33:17 -- common/autotest_common.sh@931 -- # uname 00:24:30.278 00:33:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:30.278 00:33:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 98393 00:24:30.278 killing process with pid 98393 00:24:30.278 00:33:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:30.278 00:33:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:30.278 00:33:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 98393' 00:24:30.278 00:33:17 -- common/autotest_common.sh@945 -- # kill 98393 00:24:30.278 00:33:17 -- common/autotest_common.sh@950 -- # wait 98393 00:24:30.537 00:33:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:30.537 00:33:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:30.537 00:33:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:30.537 00:33:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:30.538 00:33:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:30.538 00:33:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.538 00:33:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.538 00:33:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.538 00:33:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:30.538 00:24:30.538 real 1m1.148s 00:24:30.538 user 2m50.830s 00:24:30.538 sys 0m14.955s 00:24:30.538 00:33:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:30.538 
************************************ 00:24:30.538 END TEST nvmf_multipath 00:24:30.538 ************************************ 00:24:30.538 00:33:17 -- common/autotest_common.sh@10 -- # set +x 00:24:30.538 00:33:17 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:30.538 00:33:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:30.538 00:33:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:30.538 00:33:17 -- common/autotest_common.sh@10 -- # set +x 00:24:30.538 ************************************ 00:24:30.538 START TEST nvmf_timeout 00:24:30.538 ************************************ 00:24:30.538 00:33:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:30.538 * Looking for test storage... 00:24:30.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:30.538 00:33:17 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:30.538 00:33:17 -- nvmf/common.sh@7 -- # uname -s 00:24:30.538 00:33:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.538 00:33:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.538 00:33:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.538 00:33:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.538 00:33:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.538 00:33:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.538 00:33:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.538 00:33:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.538 00:33:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.538 00:33:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.538 00:33:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:24:30.538 00:33:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:24:30.538 00:33:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.538 00:33:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.538 00:33:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:30.538 00:33:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:30.538 00:33:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.538 00:33:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.538 00:33:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.538 00:33:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.538 00:33:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.538 00:33:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.538 00:33:17 -- paths/export.sh@5 -- # export PATH 00:24:30.538 00:33:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.538 00:33:17 -- nvmf/common.sh@46 -- # : 0 00:24:30.538 00:33:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:30.538 00:33:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:30.538 00:33:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:30.538 00:33:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.538 00:33:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.538 00:33:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:30.538 00:33:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:30.538 00:33:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:30.538 00:33:17 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:30.538 00:33:17 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:30.538 00:33:17 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:30.538 00:33:17 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:30.538 00:33:17 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:30.538 00:33:17 -- host/timeout.sh@19 -- # nvmftestinit 00:24:30.538 00:33:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:30.538 00:33:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.538 00:33:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:30.538 00:33:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:30.538 00:33:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:30.538 00:33:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.538 00:33:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.538 00:33:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.538 00:33:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 
00:24:30.538 00:33:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:30.538 00:33:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:30.538 00:33:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:30.538 00:33:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:30.538 00:33:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:30.538 00:33:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.538 00:33:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.538 00:33:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:30.538 00:33:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:30.538 00:33:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:30.538 00:33:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:30.538 00:33:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:30.538 00:33:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.538 00:33:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:30.538 00:33:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:30.538 00:33:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:30.538 00:33:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:30.538 00:33:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:30.538 00:33:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:30.538 Cannot find device "nvmf_tgt_br" 00:24:30.538 00:33:17 -- nvmf/common.sh@154 -- # true 00:24:30.538 00:33:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:30.538 Cannot find device "nvmf_tgt_br2" 00:24:30.538 00:33:17 -- nvmf/common.sh@155 -- # true 00:24:30.538 00:33:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:30.538 00:33:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:30.797 Cannot find device "nvmf_tgt_br" 00:24:30.797 00:33:17 -- nvmf/common.sh@157 -- # true 00:24:30.797 00:33:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:30.797 Cannot find device "nvmf_tgt_br2" 00:24:30.797 00:33:17 -- nvmf/common.sh@158 -- # true 00:24:30.797 00:33:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:30.797 00:33:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:30.797 00:33:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:30.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:30.797 00:33:17 -- nvmf/common.sh@161 -- # true 00:24:30.797 00:33:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:30.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:30.797 00:33:17 -- nvmf/common.sh@162 -- # true 00:24:30.797 00:33:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:30.797 00:33:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:30.797 00:33:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:30.797 00:33:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:30.797 00:33:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:30.797 00:33:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:30.797 00:33:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:24:30.797 00:33:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:30.797 00:33:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:30.797 00:33:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:30.797 00:33:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:30.797 00:33:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:30.797 00:33:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:30.797 00:33:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:30.797 00:33:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:30.797 00:33:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:30.797 00:33:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:30.797 00:33:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:30.797 00:33:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:30.797 00:33:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:31.056 00:33:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:31.056 00:33:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:31.056 00:33:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:31.056 00:33:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:31.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:24:31.056 00:24:31.056 --- 10.0.0.2 ping statistics --- 00:24:31.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.056 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:24:31.056 00:33:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:31.056 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:31.056 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:24:31.056 00:24:31.056 --- 10.0.0.3 ping statistics --- 00:24:31.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.056 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:24:31.056 00:33:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:31.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:31.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:24:31.056 00:24:31.056 --- 10.0.0.1 ping statistics --- 00:24:31.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.056 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:24:31.056 00:33:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.056 00:33:18 -- nvmf/common.sh@421 -- # return 0 00:24:31.056 00:33:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:31.056 00:33:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.056 00:33:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:31.056 00:33:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:31.056 00:33:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.056 00:33:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:31.056 00:33:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:31.056 00:33:18 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:31.056 00:33:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:31.056 00:33:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:31.056 00:33:18 -- common/autotest_common.sh@10 -- # set +x 00:24:31.056 00:33:18 -- nvmf/common.sh@469 -- # nvmfpid=99754 00:24:31.056 00:33:18 -- nvmf/common.sh@470 -- # waitforlisten 99754 00:24:31.056 00:33:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:31.056 00:33:18 -- common/autotest_common.sh@819 -- # '[' -z 99754 ']' 00:24:31.056 00:33:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.056 00:33:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:31.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.056 00:33:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.056 00:33:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:31.056 00:33:18 -- common/autotest_common.sh@10 -- # set +x 00:24:31.056 [2024-07-13 00:33:18.156096] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:31.056 [2024-07-13 00:33:18.156174] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.316 [2024-07-13 00:33:18.297772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:31.316 [2024-07-13 00:33:18.370603] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:31.316 [2024-07-13 00:33:18.370791] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.316 [2024-07-13 00:33:18.370803] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.316 [2024-07-13 00:33:18.370812] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
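For readability, the veth/namespace topology that nvmf_veth_init builds in the trace above boils down to the following condensed sketch (same iproute2/iptables commands as traced; interface names and addresses are copied from the log, and the individual "ip link set ... up" steps are omitted for brevity):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target end moves into the namespace
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target end, also namespaced
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br       # bridge the host-side peers together
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
The three pings above (10.0.0.2, 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply verify that this bridge is passing traffic before nvmf_tgt is started.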
00:24:31.316 [2024-07-13 00:33:18.370925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.316 [2024-07-13 00:33:18.371332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.940 00:33:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:31.940 00:33:19 -- common/autotest_common.sh@852 -- # return 0 00:24:31.940 00:33:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:31.940 00:33:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:31.940 00:33:19 -- common/autotest_common.sh@10 -- # set +x 00:24:32.198 00:33:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.198 00:33:19 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:32.198 00:33:19 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:32.198 [2024-07-13 00:33:19.428100] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.457 00:33:19 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:32.715 Malloc0 00:24:32.715 00:33:19 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:32.973 00:33:19 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:32.973 00:33:20 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.231 [2024-07-13 00:33:20.424219] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.231 00:33:20 -- host/timeout.sh@32 -- # bdevperf_pid=99851 00:24:33.231 00:33:20 -- host/timeout.sh@34 -- # waitforlisten 99851 /var/tmp/bdevperf.sock 00:24:33.231 00:33:20 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:33.231 00:33:20 -- common/autotest_common.sh@819 -- # '[' -z 99851 ']' 00:24:33.231 00:33:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.231 00:33:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:33.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:33.231 00:33:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.231 00:33:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:33.231 00:33:20 -- common/autotest_common.sh@10 -- # set +x 00:24:33.490 [2024-07-13 00:33:20.482853] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:24:33.490 [2024-07-13 00:33:20.482915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99851 ] 00:24:33.490 [2024-07-13 00:33:20.618100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.490 [2024-07-13 00:33:20.696938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.422 00:33:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:34.422 00:33:21 -- common/autotest_common.sh@852 -- # return 0 00:24:34.422 00:33:21 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:34.422 00:33:21 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:34.986 NVMe0n1 00:24:34.986 00:33:21 -- host/timeout.sh@51 -- # rpc_pid=99893 00:24:34.986 00:33:21 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:34.986 00:33:21 -- host/timeout.sh@53 -- # sleep 1 00:24:34.986 Running I/O for 10 seconds... 00:24:35.919 00:33:22 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:36.179 [2024-07-13 00:33:23.184309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.179 [2024-07-13 00:33:23.184373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.179 [2024-07-13 00:33:23.184384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.179 [2024-07-13 00:33:23.184391] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.179 [2024-07-13 00:33:23.184399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.179 [2024-07-13 00:33:23.184407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.179 [2024-07-13 00:33:23.184414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.179 [2024-07-13 00:33:23.184422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.179 [2024-07-13 00:33:23.184429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.179 [2024-07-13 00:33:23.184437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.179 [2024-07-13 00:33:23.184444] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.179 [2024-07-13 00:33:23.184451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.179 
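Condensed, the setup traced above for the timeout test is: configure the TCP target over rpc.py, then attach a controller from the bdevperf side with a 5 s controller-loss timeout and a 2 s reconnect delay. The commands below are copied from the trace (the bdevperf RPC socket path is the one used above); this is a readability sketch, not a substitute for the scripts themselves:
# target side (nvmf_tgt running inside nvmf_tgt_ns_spdk)
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side (bdevperf with its own RPC socket)
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
The remove_listener call that follows is what provokes the ABORTED - SQ DELETION storm below: with the listener gone, the attached controller loses its path and the 5 s / 2 s timeout knobs govern how long bdev_nvme keeps retrying before failing the reset.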
[2024-07-13 00:33:23.184460] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.179 [2024-07-13 00:33:23.184468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.179 [2024-07-13 00:33:23.184475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184571] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184667] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184734] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184753] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184772] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.184807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed8b30 is same with the state(5) to be set 00:24:36.180 [2024-07-13 00:33:23.185188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.180 [2024-07-13 00:33:23.185496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.180 [2024-07-13 00:33:23.185516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.180 [2024-07-13 00:33:23.185533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.180 [2024-07-13 00:33:23.185567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.180 [2024-07-13 00:33:23.185651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185660] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.180 [2024-07-13 00:33:23.185667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.180 [2024-07-13 00:33:23.185686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.180 [2024-07-13 00:33:23.185703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.180 [2024-07-13 00:33:23.185713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.180 [2024-07-13 00:33:23.185734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.185753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.185762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.185772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.185780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.185789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.185797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.185807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.185815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.185825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.185833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.185842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.185850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.185859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.185867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.185877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.185885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.185895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.185904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.185913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.185921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.185932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.185940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.185949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.185957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.185967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.185974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.185983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.185990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.185999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.186007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.186023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5352 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.186045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.186062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.181 [2024-07-13 00:33:23.186079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.181 [2024-07-13 00:33:23.186097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.181 [2024-07-13 00:33:23.186114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.186131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.181 [2024-07-13 00:33:23.186147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.181 [2024-07-13 00:33:23.186164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.186181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.186198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 
[2024-07-13 00:33:23.186216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.186232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.186249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.186266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.186283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.186300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.186318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.181 [2024-07-13 00:33:23.186335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.186352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.181 [2024-07-13 00:33:23.186370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.186388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.181 [2024-07-13 00:33:23.186406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.186431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.181 [2024-07-13 00:33:23.186449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.181 [2024-07-13 00:33:23.186466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.181 [2024-07-13 00:33:23.186483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.181 [2024-07-13 00:33:23.186492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.181 [2024-07-13 00:33:23.186500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.182 [2024-07-13 00:33:23.186533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.182 [2024-07-13 00:33:23.186552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.182 [2024-07-13 00:33:23.186569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.182 [2024-07-13 00:33:23.186630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.182 [2024-07-13 00:33:23.186684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.182 [2024-07-13 00:33:23.186720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.182 [2024-07-13 00:33:23.186737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:36.182 [2024-07-13 00:33:23.186765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.182 [2024-07-13 00:33:23.186772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186947] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.182 [2024-07-13 00:33:23.186972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.186990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.186999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.182 [2024-07-13 00:33:23.187008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.187017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.187027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.187036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.182 [2024-07-13 00:33:23.187044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.187054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.182 [2024-07-13 00:33:23.187062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.187071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.182 [2024-07-13 00:33:23.187079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.187090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.187098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.187107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.187115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.187124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:6224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.187132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.187141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.182 [2024-07-13 00:33:23.187155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.187164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.182 [2024-07-13 00:33:23.187172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.187182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.187190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.187200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.187208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.187218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.187226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.187236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.182 [2024-07-13 00:33:23.187244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.187254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.182 [2024-07-13 00:33:23.187262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.182 [2024-07-13 00:33:23.187271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.183 [2024-07-13 00:33:23.187280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.183 [2024-07-13 00:33:23.187289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.183 [2024-07-13 00:33:23.187297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.183 [2024-07-13 00:33:23.187307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:36.183 [2024-07-13 00:33:23.187315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.183 [2024-07-13 00:33:23.187325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.183 [2024-07-13 00:33:23.187333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.183 [2024-07-13 00:33:23.187342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.183 [2024-07-13 00:33:23.187350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.183 [2024-07-13 00:33:23.187359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.183 [2024-07-13 00:33:23.187366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.183 [2024-07-13 00:33:23.187375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.183 [2024-07-13 00:33:23.187383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.183 [2024-07-13 00:33:23.187392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.183 [2024-07-13 00:33:23.187400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.183 [2024-07-13 00:33:23.187409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.183 [2024-07-13 00:33:23.187417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.183 [2024-07-13 00:33:23.187426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.183 [2024-07-13 00:33:23.187445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.183 [2024-07-13 00:33:23.187455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.183 [2024-07-13 00:33:23.187463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.183 [2024-07-13 00:33:23.187472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.183 [2024-07-13 00:33:23.187480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.183 [2024-07-13 00:33:23.187490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.183 [2024-07-13 
00:33:23.187498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.183 [2024-07-13 00:33:23.187506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a218a0 is same with the state(5) to be set 00:24:36.183 [2024-07-13 00:33:23.187517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:36.183 [2024-07-13 00:33:23.187524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:36.183 [2024-07-13 00:33:23.187531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5744 len:8 PRP1 0x0 PRP2 0x0 00:24:36.183 [2024-07-13 00:33:23.187539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.183 [2024-07-13 00:33:23.187598] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a218a0 was disconnected and freed. reset controller. 00:24:36.183 [2024-07-13 00:33:23.187805] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.183 [2024-07-13 00:33:23.187884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a035e0 (9): Bad file descriptor 00:24:36.183 [2024-07-13 00:33:23.191712] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a035e0 (9): Bad file descriptor 00:24:36.183 [2024-07-13 00:33:23.191741] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.183 [2024-07-13 00:33:23.191752] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.183 [2024-07-13 00:33:23.191762] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.183 [2024-07-13 00:33:23.191780] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:36.183 [2024-07-13 00:33:23.191790] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.183 00:33:23 -- host/timeout.sh@56 -- # sleep 2 00:24:38.080 [2024-07-13 00:33:25.191863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.080 [2024-07-13 00:33:25.191933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.080 [2024-07-13 00:33:25.191951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a035e0 with addr=10.0.0.2, port=4420 00:24:38.080 [2024-07-13 00:33:25.191962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a035e0 is same with the state(5) to be set 00:24:38.080 [2024-07-13 00:33:25.191980] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a035e0 (9): Bad file descriptor 00:24:38.080 [2024-07-13 00:33:25.191996] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.080 [2024-07-13 00:33:25.192006] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.080 [2024-07-13 00:33:25.192015] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:38.080 [2024-07-13 00:33:25.192034] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.080 [2024-07-13 00:33:25.192044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.080 00:33:25 -- host/timeout.sh@57 -- # get_controller
00:24:38.080 00:33:25 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:38.080 00:33:25 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:38.338 00:33:25 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:24:38.338 00:33:25 -- host/timeout.sh@58 -- # get_bdev
00:24:38.338 00:33:25 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:38.338 00:33:25 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:38.595 00:33:25 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:24:38.595 00:33:25 -- host/timeout.sh@61 -- # sleep 5
00:24:39.969 [2024-07-13 00:33:27.192112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.969 [2024-07-13 00:33:27.192176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.969 [2024-07-13 00:33:27.192193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a035e0 with addr=10.0.0.2, port=4420
00:24:39.969 [2024-07-13 00:33:27.192204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a035e0 is same with the state(5) to be set
00:24:39.969 [2024-07-13 00:33:27.192222] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a035e0 (9): Bad file descriptor
00:24:39.969 [2024-07-13 00:33:27.192238] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.969 [2024-07-13 00:33:27.192247] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.969 [2024-07-13 00:33:27.192255] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.969 [2024-07-13 00:33:27.192275] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.969 [2024-07-13 00:33:27.192285] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.494 [2024-07-13 00:33:29.192303] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.494 [2024-07-13 00:33:29.192337] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.494 [2024-07-13 00:33:29.192347] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.494 [2024-07-13 00:33:29.192356] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:24:42.494 [2024-07-13 00:33:29.192374] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:43.059
00:24:43.059 Latency(us)
00:24:43.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:43.059 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:43.059 Verification LBA range: start 0x0 length 0x4000
00:24:43.059 NVMe0n1 : 8.13 2097.72 8.19 15.75 0.00 60491.22 2517.18 7015926.69
00:24:43.059 ===================================================================================================================
00:24:43.059 Total : 2097.72 8.19 15.75 0.00 60491.22 2517.18 7015926.69
00:24:43.059 0
00:24:43.623 00:33:30 -- host/timeout.sh@62 -- # get_controller
00:24:43.623 00:33:30 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:43.623 00:33:30 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:43.880 00:33:31 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:24:43.880 00:33:31 -- host/timeout.sh@63 -- # get_bdev
00:24:43.880 00:33:31 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:43.880 00:33:31 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:44.138 00:33:31 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:24:44.138 00:33:31 -- host/timeout.sh@65 -- # wait 99893
00:24:44.138 00:33:31 -- host/timeout.sh@67 -- # killprocess 99851
00:24:44.138 00:33:31 -- common/autotest_common.sh@926 -- # '[' -z 99851 ']'
00:24:44.138 00:33:31 -- common/autotest_common.sh@930 -- # kill -0 99851
00:24:44.138 00:33:31 -- common/autotest_common.sh@931 -- # uname
00:24:44.138 00:33:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:44.138 00:33:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 99851
00:24:44.138 00:33:31 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:24:44.138 00:33:31 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:24:44.138 killing process with pid 99851
00:24:44.138 00:33:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99851'
00:24:44.138 Received shutdown signal, test time was about 9.270133 seconds
00:24:44.138
00:24:44.138 Latency(us)
00:24:44.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:44.138 ===================================================================================================================
00:24:44.138 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:44.138 00:33:31 -- common/autotest_common.sh@945 -- # kill 99851
00:24:44.138 00:33:31 -- common/autotest_common.sh@950 -- # wait 99851
00:24:44.396 00:33:31 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:44.653 [2024-07-13 00:33:31.826110] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:44.653 00:33:31 -- host/timeout.sh@74 -- # bdevperf_pid=100051
00:24:44.653 00:33:31 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:24:44.653 00:33:31 -- host/timeout.sh@76 -- # waitforlisten 100051 /var/tmp/bdevperf.sock
00:24:44.653 00:33:31 -- common/autotest_common.sh@819 -- # '[' -z 100051 ']'
00:24:44.653 00:33:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:44.653 00:33:31 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:44.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:44.653 00:33:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:44.653 00:33:31 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:44.653 00:33:31 -- common/autotest_common.sh@10 -- # set +x
00:24:44.911 [2024-07-13 00:33:31.889679] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:24:44.911 [2024-07-13 00:33:31.889755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100051 ]
00:24:44.911 [2024-07-13 00:33:32.025435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:44.911 [2024-07-13 00:33:32.104331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:45.870 00:33:32 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:24:45.870 00:33:32 -- common/autotest_common.sh@852 -- # return 0
00:24:45.870 00:33:32 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:24:45.870 00:33:33 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:24:46.127 NVMe0n1
00:24:46.384 00:33:33 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:46.384 00:33:33 -- host/timeout.sh@84 -- # rpc_pid=100094
00:24:46.384 00:33:33 -- host/timeout.sh@86 -- # sleep 1
00:24:46.384 Running I/O for 10 seconds...
00:24:47.316 00:33:34 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.577 [2024-07-13 00:33:34.601157] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601278] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601285] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601308] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601315] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601338] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601395] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601419] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601426] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601434] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601441] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601494] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601508] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601553] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.601560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207e7a0 is same with the state(5) to be set 00:24:47.577 [2024-07-13 00:33:34.602083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-07-13 00:33:34.602113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-07-13 00:33:34.602133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-07-13 00:33:34.602142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-07-13 00:33:34.602154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-07-13 00:33:34.602162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-07-13 00:33:34.602172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-07-13 00:33:34.602179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-07-13 00:33:34.602189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-07-13 00:33:34.602197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-07-13 00:33:34.602206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-07-13 00:33:34.602214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-07-13 00:33:34.602223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:47.577 [2024-07-13 00:33:34.602231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-07-13 00:33:34.602240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-07-13 00:33:34.602248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-07-13 00:33:34.602257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-07-13 00:33:34.602265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-07-13 00:33:34.602274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-07-13 00:33:34.602281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-07-13 00:33:34.602291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-07-13 00:33:34.602299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-07-13 00:33:34.602308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-07-13 00:33:34.602316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-07-13 00:33:34.602325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-07-13 00:33:34.602332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-07-13 00:33:34.602341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602422] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602610] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-07-13 00:33:34.602712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-07-13 00:33:34.602787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-07-13 00:33:34.602805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-07-13 00:33:34.602823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-07-13 00:33:34.602878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.602934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-07-13 00:33:34.602967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-07-13 00:33:34.602984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.602994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.603002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.603012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.603020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 
00:33:34.603047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.603055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.603065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.603073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.603082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-07-13 00:33:34.603090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.603099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.603107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.603117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.603124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.603133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-07-13 00:33:34.603142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.603151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-07-13 00:33:34.603159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.603169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-07-13 00:33:34.603176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.603186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-07-13 00:33:34.603193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-07-13 00:33:34.603203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.579 [2024-07-13 00:33:34.603228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.579 [2024-07-13 00:33:34.603245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.579 [2024-07-13 00:33:34.603288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.579 [2024-07-13 00:33:34.603323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.579 [2024-07-13 00:33:34.603340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.579 [2024-07-13 00:33:34.603375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.579 [2024-07-13 00:33:34.603392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:99 nsid:1 lba:7840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.579 [2024-07-13 00:33:34.603426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7184 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.579 [2024-07-13 00:33:34.603773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 
00:33:34.603791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.579 [2024-07-13 00:33:34.603867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.603928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.579 [2024-07-13 00:33:34.603946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.579 [2024-07-13 00:33:34.603963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.579 [2024-07-13 00:33:34.603981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.603990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.579 [2024-07-13 00:33:34.604015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.579 [2024-07-13 00:33:34.604024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.580 [2024-07-13 00:33:34.604175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.580 [2024-07-13 00:33:34.604192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.580 [2024-07-13 00:33:34.604233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.580 [2024-07-13 00:33:34.604249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.580 [2024-07-13 00:33:34.604267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.580 [2024-07-13 00:33:34.604284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.580 [2024-07-13 00:33:34.604300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.580 [2024-07-13 00:33:34.604318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.580 [2024-07-13 00:33:34.604356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:47.580 [2024-07-13 00:33:34.604365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.580 [2024-07-13 00:33:34.604521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.580 [2024-07-13 00:33:34.604530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ac780 is same with the state(5) to be set 00:24:47.580 [2024-07-13 00:33:34.604541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.580 [2024-07-13 00:33:34.604547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually:
00:24:47.580 [2024-07-13 00:33:34.604554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7528 len:8 PRP1 0x0 PRP2 0x0
00:24:47.580 [2024-07-13 00:33:34.604562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.580 [2024-07-13 00:33:34.604622] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12ac780 was disconnected and freed. reset controller.
00:24:47.580 [2024-07-13 00:33:34.604906] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:47.580 [2024-07-13 00:33:34.604985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128e5e0 (9): Bad file descriptor
00:24:47.580 [2024-07-13 00:33:34.605150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:47.580 [2024-07-13 00:33:34.605192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:47.580 [2024-07-13 00:33:34.605206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128e5e0 with addr=10.0.0.2, port=4420
00:24:47.580 [2024-07-13 00:33:34.605215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128e5e0 is same with the state(5) to be set
00:24:47.580 [2024-07-13 00:33:34.605231] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128e5e0 (9): Bad file descriptor
00:24:47.580 [2024-07-13 00:33:34.605261] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:47.580 [2024-07-13 00:33:34.605269] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:47.580 [2024-07-13 00:33:34.605278] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:47.580 [2024-07-13 00:33:34.605297] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:47.580 [2024-07-13 00:33:34.605307] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:47.580 00:33:34 -- host/timeout.sh@90 -- # sleep 1
00:24:48.510 [2024-07-13 00:33:35.605381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.510 [2024-07-13 00:33:35.605445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.510 [2024-07-13 00:33:35.605461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128e5e0 with addr=10.0.0.2, port=4420
00:24:48.510 [2024-07-13 00:33:35.605473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128e5e0 is same with the state(5) to be set
00:24:48.510 [2024-07-13 00:33:35.605492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128e5e0 (9): Bad file descriptor
00:24:48.510 [2024-07-13 00:33:35.605507] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.510 [2024-07-13 00:33:35.605516] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.510 [2024-07-13 00:33:35.605525] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.510 [2024-07-13 00:33:35.605545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:48.510 [2024-07-13 00:33:35.605555] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.510 00:33:35 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:48.767 [2024-07-13 00:33:35.859423] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:48.767 00:33:35 -- host/timeout.sh@92 -- # wait 100094
00:24:49.697 [2024-07-13 00:33:36.621840] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:56.289
00:24:56.289 Latency(us)
00:24:56.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:56.289 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:56.289 Verification LBA range: start 0x0 length 0x4000
00:24:56.289 NVMe0n1 : 10.00 10715.17 41.86 0.00 0.00 11925.56 700.04 3019898.88
00:24:56.289 ===================================================================================================================
00:24:56.289 Total : 10715.17 41.86 0.00 0.00 11925.56 700.04 3019898.88
00:24:56.289 0
00:24:56.289 00:33:43 -- host/timeout.sh@97 -- # rpc_pid=100215
00:33:43 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:43 -- host/timeout.sh@98 -- # sleep 1
00:24:56.546 Running I/O for 10 seconds...
00:24:57.480 00:33:44 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:57.738 [2024-07-13 00:33:44.713729] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set
00:24:57.738 [2024-07-13 00:33:44.714407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set
00:24:57.738 [2024-07-13 00:33:44.714495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set
00:24:57.738 [2024-07-13 00:33:44.714577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set
00:24:57.738 [2024-07-13 00:33:44.714654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set
00:24:57.738 [2024-07-13 00:33:44.714740] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set
00:24:57.738 [2024-07-13 00:33:44.714806] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set
00:24:57.738 [2024-07-13 00:33:44.714855] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set
00:24:57.738 [2024-07-13 00:33:44.714901] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set
00:24:57.738 [2024-07-13 00:33:44.714969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set
00:24:57.738 [2024-07-13 00:33:44.715016]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.738 [2024-07-13 00:33:44.715074] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.715122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.715176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.715223] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.715283] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.715345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.715400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.715447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.715491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.715545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.715592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.715685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.715737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.715803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.715855] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.715901] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.715947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.716031] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.716080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.716124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.716178] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the 
state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.716226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.716287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.716343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.716391] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.716436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.716489] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.716546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.716606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.716753] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.716819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.716878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.716938] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.716989] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.717074] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.717130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed97b0 is same with the state(5) to be set 00:24:57.739 [2024-07-13 00:33:44.717658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.717707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.717731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.717757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.717769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.717779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 
00:33:44.717790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.717799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.717810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.717819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.717830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.717839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.717850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.717859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.717870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.717879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.717890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.717898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.717909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.717917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.717928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.717937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.717947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.717972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.717982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.717991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.718002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.718011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.718021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.718029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.718039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.718048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.718058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.718066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.718076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.718084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.718094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.739 [2024-07-13 00:33:44.718103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.718116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.739 [2024-07-13 00:33:44.718125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.739 [2024-07-13 00:33:44.718135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:42 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7024 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.740 [2024-07-13 00:33:44.718435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.740 [2024-07-13 00:33:44.718513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.740 [2024-07-13 00:33:44.718532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 
00:33:44.718588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.740 [2024-07-13 00:33:44.718607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.740 [2024-07-13 00:33:44.718641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.740 [2024-07-13 00:33:44.718720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.740 [2024-07-13 00:33:44.718739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.740 [2024-07-13 00:33:44.718778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.740 [2024-07-13 00:33:44.718858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.740 [2024-07-13 00:33:44.718892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.740 [2024-07-13 00:33:44.718912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.740 [2024-07-13 00:33:44.718972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.718983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.718991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.740 [2024-07-13 00:33:44.719002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.740 [2024-07-13 00:33:44.719011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.741 [2024-07-13 00:33:44.719066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.741 [2024-07-13 00:33:44.719085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.741 [2024-07-13 00:33:44.719186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.741 [2024-07-13 00:33:44.719245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:57.741 [2024-07-13 00:33:44.719255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.741 [2024-07-13 00:33:44.719285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.741 [2024-07-13 00:33:44.719304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.741 [2024-07-13 00:33:44.719344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.741 [2024-07-13 00:33:44.719365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.741 [2024-07-13 00:33:44.719404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.741 [2024-07-13 00:33:44.719446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719458] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.741 [2024-07-13 00:33:44.719467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.741 [2024-07-13 00:33:44.719508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.741 [2024-07-13 00:33:44.719533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.741 [2024-07-13 00:33:44.719574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719694] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7272 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.741 [2024-07-13 00:33:44.719915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.741 [2024-07-13 00:33:44.719939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.719965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.742 [2024-07-13 00:33:44.719974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.719984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.742 [2024-07-13 00:33:44.719993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.742 [2024-07-13 00:33:44.720012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.742 [2024-07-13 00:33:44.720038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.742 [2024-07-13 00:33:44.720057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.742 [2024-07-13 00:33:44.720076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.742 [2024-07-13 00:33:44.720095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.742 [2024-07-13 00:33:44.720114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.742 
[2024-07-13 00:33:44.720133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.742 [2024-07-13 00:33:44.720158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.742 [2024-07-13 00:33:44.720178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.742 [2024-07-13 00:33:44.720196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.742 [2024-07-13 00:33:44.720215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.742 [2024-07-13 00:33:44.720233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.742 [2024-07-13 00:33:44.720252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.742 [2024-07-13 00:33:44.720271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.742 [2024-07-13 00:33:44.720290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.742 [2024-07-13 00:33:44.720309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.742 [2024-07-13 00:33:44.720327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.742 [2024-07-13 00:33:44.720347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.742 [2024-07-13 00:33:44.720366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bce20 is same with the state(5) to be set 00:24:57.742 [2024-07-13 00:33:44.720387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.742 [2024-07-13 00:33:44.720394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.742 [2024-07-13 00:33:44.720402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7400 len:8 PRP1 0x0 PRP2 0x0 00:24:57.742 [2024-07-13 00:33:44.720410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.742 [2024-07-13 00:33:44.720472] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12bce20 was disconnected and freed. reset controller. 00:24:57.742 [2024-07-13 00:33:44.720746] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.742 [2024-07-13 00:33:44.720822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128e5e0 (9): Bad file descriptor 00:24:57.742 [2024-07-13 00:33:44.720937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.742 [2024-07-13 00:33:44.720985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.742 [2024-07-13 00:33:44.721001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128e5e0 with addr=10.0.0.2, port=4420 00:24:57.742 [2024-07-13 00:33:44.721011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128e5e0 is same with the state(5) to be set 00:24:57.742 [2024-07-13 00:33:44.721031] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128e5e0 (9): Bad file descriptor 00:24:57.742 [2024-07-13 00:33:44.721076] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.742 [2024-07-13 00:33:44.721085] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.742 [2024-07-13 00:33:44.721110] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.742 [2024-07-13 00:33:44.721129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:57.742 [2024-07-13 00:33:44.721138] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.742 00:33:44 -- host/timeout.sh@101 -- # sleep 3 00:24:58.676 [2024-07-13 00:33:45.721206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.676 [2024-07-13 00:33:45.721266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.676 [2024-07-13 00:33:45.721282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128e5e0 with addr=10.0.0.2, port=4420 00:24:58.676 [2024-07-13 00:33:45.721292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128e5e0 is same with the state(5) to be set 00:24:58.676 [2024-07-13 00:33:45.721309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128e5e0 (9): Bad file descriptor 00:24:58.676 [2024-07-13 00:33:45.721323] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.676 [2024-07-13 00:33:45.721331] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.676 [2024-07-13 00:33:45.721339] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.676 [2024-07-13 00:33:45.721356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.676 [2024-07-13 00:33:45.721366] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.609 [2024-07-13 00:33:46.721439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.609 [2024-07-13 00:33:46.721503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.609 [2024-07-13 00:33:46.721519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128e5e0 with addr=10.0.0.2, port=4420 00:24:59.609 [2024-07-13 00:33:46.721530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128e5e0 is same with the state(5) to be set 00:24:59.609 [2024-07-13 00:33:46.721549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128e5e0 (9): Bad file descriptor 00:24:59.609 [2024-07-13 00:33:46.721564] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.609 [2024-07-13 00:33:46.721572] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.609 [2024-07-13 00:33:46.721581] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.609 [2024-07-13 00:33:46.721601] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.609 [2024-07-13 00:33:46.721610] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.543 [2024-07-13 00:33:47.723312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.543 [2024-07-13 00:33:47.723375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.543 [2024-07-13 00:33:47.723391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128e5e0 with addr=10.0.0.2, port=4420 00:25:00.543 [2024-07-13 00:33:47.723402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128e5e0 is same with the state(5) to be set 00:25:00.543 [2024-07-13 00:33:47.723591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128e5e0 (9): Bad file descriptor 00:25:00.543 [2024-07-13 00:33:47.723777] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.543 [2024-07-13 00:33:47.723793] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.543 [2024-07-13 00:33:47.723811] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.543 [2024-07-13 00:33:47.725954] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.543 [2024-07-13 00:33:47.725976] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.543 00:33:47 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.801 [2024-07-13 00:33:47.982279] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.801 00:33:48 -- host/timeout.sh@103 -- # wait 100215 00:25:01.734 [2024-07-13 00:33:48.743106] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
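The recovery sequence above is driven from the test script by toggling the target's TCP listener while bdev_nvme keeps retrying: every connect() fails with errno 111 until the listener is re-added, after which the pending reset finally succeeds. A minimal sketch of that toggle, using only the rpc.py invocations and arguments that appear in this trace (the surrounding host/timeout.sh helpers are not reproduced here):

  # drop the listener so each reconnect attempt fails with ECONNREFUSED (errno 111)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # let bdev_nvme run through a few failed reconnect cycles
  sleep 3
  # restore the listener; the next reconnect/reset attempt should complete
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420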
00:25:06.990 00:25:06.990 Latency(us) 00:25:06.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.990 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:06.990 Verification LBA range: start 0x0 length 0x4000 00:25:06.990 NVMe0n1 : 10.01 9218.14 36.01 6763.50 0.00 7993.18 603.23 3019898.88 00:25:06.990 =================================================================================================================== 00:25:06.990 Total : 9218.14 36.01 6763.50 0.00 7993.18 0.00 3019898.88 00:25:06.990 0 00:25:06.990 00:33:53 -- host/timeout.sh@105 -- # killprocess 100051 00:25:06.990 00:33:53 -- common/autotest_common.sh@926 -- # '[' -z 100051 ']' 00:25:06.990 00:33:53 -- common/autotest_common.sh@930 -- # kill -0 100051 00:25:06.990 00:33:53 -- common/autotest_common.sh@931 -- # uname 00:25:06.991 00:33:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:06.991 00:33:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100051 00:25:06.991 killing process with pid 100051 00:25:06.991 Received shutdown signal, test time was about 10.000000 seconds 00:25:06.991 00:25:06.991 Latency(us) 00:25:06.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.991 =================================================================================================================== 00:25:06.991 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:06.991 00:33:53 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:06.991 00:33:53 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:06.991 00:33:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100051' 00:25:06.991 00:33:53 -- common/autotest_common.sh@945 -- # kill 100051 00:25:06.991 00:33:53 -- common/autotest_common.sh@950 -- # wait 100051 00:25:06.991 00:33:53 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:25:06.991 00:33:53 -- host/timeout.sh@110 -- # bdevperf_pid=100337 00:25:06.991 00:33:53 -- host/timeout.sh@112 -- # waitforlisten 100337 /var/tmp/bdevperf.sock 00:25:06.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:06.991 00:33:53 -- common/autotest_common.sh@819 -- # '[' -z 100337 ']' 00:25:06.991 00:33:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:06.991 00:33:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:06.991 00:33:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:06.991 00:33:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:06.991 00:33:53 -- common/autotest_common.sh@10 -- # set +x 00:25:06.991 [2024-07-13 00:33:53.944174] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
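For context, the next bdevperf instance is launched idle and only driven over its private RPC socket once that socket is listening. A rough sketch of those two steps with the binary path and flags taken from the trace (waitforlisten is the autotest_common.sh helper referenced above; outside that harness, any loop polling the socket path would serve the same purpose):

  # -z keeps bdevperf idle until perform_tests is requested; -r points it at a private RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!
  # wait for the RPC socket before issuing any bdev_nvme_* or perform_tests RPCs
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock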
00:25:06.991 [2024-07-13 00:33:53.944590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100337 ] 00:25:06.991 [2024-07-13 00:33:54.084877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.991 [2024-07-13 00:33:54.152435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:07.923 00:33:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:07.923 00:33:54 -- common/autotest_common.sh@852 -- # return 0 00:25:07.923 00:33:54 -- host/timeout.sh@116 -- # dtrace_pid=100368 00:25:07.923 00:33:54 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 100337 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:25:07.923 00:33:54 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:25:07.923 00:33:55 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:08.490 NVMe0n1 00:25:08.490 00:33:55 -- host/timeout.sh@124 -- # rpc_pid=100419 00:25:08.490 00:33:55 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:08.490 00:33:55 -- host/timeout.sh@125 -- # sleep 1 00:25:08.490 Running I/O for 10 seconds... 00:25:09.423 00:33:56 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:09.684 [2024-07-13 00:33:56.684328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same with the state(5) to be set 00:25:09.684 [2024-07-13 00:33:56.684399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same with the state(5) to be set 00:25:09.684 [2024-07-13 00:33:56.684410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same with the state(5) to be set 00:25:09.684 [2024-07-13 00:33:56.684418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same with the state(5) to be set 00:25:09.684 [2024-07-13 00:33:56.684425] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same with the state(5) to be set 00:25:09.684 [2024-07-13 00:33:56.684433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same with the state(5) to be set 00:25:09.684 [2024-07-13 00:33:56.684441] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same with the state(5) to be set 00:25:09.684 [2024-07-13 00:33:56.684449] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same with the state(5) to be set 00:25:09.684 [2024-07-13 00:33:56.684455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same with the state(5) to be set 00:25:09.684 [2024-07-13 00:33:56.684467] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same with the state(5) to be set 00:25:09.684 [2024-07-13 00:33:56.684475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1eddc20 is same with the state(5) to be set [... the same tcp.c:1574:nvmf_tcp_qpair_set_recv_state *ERROR* message for tqpair=0x1eddc20 repeats with timestamps from 00:33:56.684483 through 00:33:56.685389 and is elided here ...] 00:25:09.686 [2024-07-13 00:33:56.685396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same
with the state(5) to be set 00:25:09.686 [2024-07-13 00:33:56.685404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same with the state(5) to be set 00:25:09.686 [2024-07-13 00:33:56.685412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same with the state(5) to be set 00:25:09.686 [2024-07-13 00:33:56.685420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same with the state(5) to be set 00:25:09.686 [2024-07-13 00:33:56.685427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same with the state(5) to be set 00:25:09.686 [2024-07-13 00:33:56.685435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same with the state(5) to be set 00:25:09.686 [2024-07-13 00:33:56.685443] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same with the state(5) to be set 00:25:09.686 [2024-07-13 00:33:56.685451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddc20 is same with the state(5) to be set 00:25:09.686 [2024-07-13 00:33:56.685779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.685819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.685843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.685855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.685866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.685876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.685887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.685897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.685909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.685919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.685929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:58048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.685939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.685950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.685960] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.685970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.685979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.685990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:124056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:67880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:09.686 [2024-07-13 00:33:56.686385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.686 [2024-07-13 00:33:56.686446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.686 [2024-07-13 00:33:56.686454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686590] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:118776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:29872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.686986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.686996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.687007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:91 nsid:1 lba:28352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.687016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.687027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.687036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.687047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.687057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.687068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.687077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.687088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.687104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.687115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.687125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.687135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.687153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.687164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.687179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.687189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.687198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.687209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.687219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.687229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109448 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.687239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.687250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.687259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.687271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.687280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.687290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.687299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.687310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.687319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.687330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.687339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.687 [2024-07-13 00:33:56.687350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.687 [2024-07-13 00:33:56.687359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:09.688 [2024-07-13 00:33:56.687445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:123560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:54024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 
00:33:56.687659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.687982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.687991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.688002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.688011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.688021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.688030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.688041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.688050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.688061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.688070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.688080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.688090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.688101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.688117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.688128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.688138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.688149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.688158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.688169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.688184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.688 [2024-07-13 00:33:56.688195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.688 [2024-07-13 00:33:56.688204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.689 [2024-07-13 00:33:56.688214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.689 [2024-07-13 00:33:56.688224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.689 [2024-07-13 00:33:56.688234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.689 [2024-07-13 00:33:56.688243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.689 [2024-07-13 00:33:56.688253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.689 [2024-07-13 00:33:56.688262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.689 [2024-07-13 00:33:56.688274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.689 [2024-07-13 00:33:56.688283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.689 [2024-07-13 00:33:56.688294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.689 [2024-07-13 00:33:56.688303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.689 [2024-07-13 00:33:56.688314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:68072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.689 [2024-07-13 00:33:56.688324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.689 [2024-07-13 00:33:56.688335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.689 [2024-07-13 00:33:56.688344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.689 [2024-07-13 00:33:56.688355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.689 [2024-07-13 00:33:56.688363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.689 [2024-07-13 00:33:56.688374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.689 [2024-07-13 00:33:56.688383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.689 [2024-07-13 00:33:56.688394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.689 [2024-07-13 00:33:56.688403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.689 [2024-07-13 00:33:56.688413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.689 [2024-07-13 00:33:56.688421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.689 [2024-07-13 00:33:56.688432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.689 [2024-07-13 00:33:56.688447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.689 [2024-07-13 00:33:56.688458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.689 [2024-07-13 00:33:56.688468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.689 [2024-07-13 00:33:56.688478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc48a0 is same with the state(5) to be set 00:25:09.689 [2024-07-13 00:33:56.688491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:09.689 [2024-07-13 00:33:56.688499] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:09.689 [2024-07-13 00:33:56.688513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54128 len:8 PRP1 0x0 PRP2 0x0 00:25:09.689 [2024-07-13 00:33:56.688523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.689 [2024-07-13 00:33:56.688600] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dc48a0 was disconnected and freed. reset controller. 00:25:09.689 [2024-07-13 00:33:56.688938] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.689 [2024-07-13 00:33:56.689074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da65e0 (9): Bad file descriptor 00:25:09.689 [2024-07-13 00:33:56.689221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.689 [2024-07-13 00:33:56.689266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.689 [2024-07-13 00:33:56.689282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da65e0 with addr=10.0.0.2, port=4420 00:25:09.689 [2024-07-13 00:33:56.689292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da65e0 is same with the state(5) to be set 00:25:09.689 [2024-07-13 00:33:56.689310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da65e0 (9): Bad file descriptor 00:25:09.689 [2024-07-13 00:33:56.689326] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:09.689 [2024-07-13 00:33:56.689335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:09.689 [2024-07-13 00:33:56.689347] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:09.689 [2024-07-13 00:33:56.689365] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
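[Editorial note, not part of the captured output: the records above show the host side of the timeout test giving up on qpair 0x1dc48a0 and resetting the controller. Every READ still queued on that qpair is completed manually with "ABORTED - SQ DELETION (00/08)", i.e. status code type 0x0 (generic) with status code 0x08 (command aborted due to SQ deletion), and each reconnect to 10.0.0.2:4420 then fails with errno 111 (ECONNREFUSED), so the reset/reconnect cycle repeats roughly every two seconds for the rest of the run. A hedged, purely illustrative shell probe, using bash's /dev/tcp pseudo-device rather than anything the test itself runs, could confirm that nothing is listening on that address:]
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "a listener is accepting connections on 10.0.0.2:4420"
else
    echo "connection refused or timed out - consistent with the errno 111 above"
fi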
00:25:09.689 [2024-07-13 00:33:56.689376] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:09.689 00:33:56 -- host/timeout.sh@128 -- # wait 100419 00:25:11.588 [2024-07-13 00:33:58.689583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.588 [2024-07-13 00:33:58.689737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.588 [2024-07-13 00:33:58.689755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da65e0 with addr=10.0.0.2, port=4420 00:25:11.588 [2024-07-13 00:33:58.689772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da65e0 is same with the state(5) to be set 00:25:11.588 [2024-07-13 00:33:58.689802] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da65e0 (9): Bad file descriptor 00:25:11.588 [2024-07-13 00:33:58.689823] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.588 [2024-07-13 00:33:58.689834] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.588 [2024-07-13 00:33:58.689845] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.588 [2024-07-13 00:33:58.689878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.588 [2024-07-13 00:33:58.689890] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.497 [2024-07-13 00:34:00.690118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.497 [2024-07-13 00:34:00.690253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.497 [2024-07-13 00:34:00.690272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da65e0 with addr=10.0.0.2, port=4420 00:25:13.497 [2024-07-13 00:34:00.690288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da65e0 is same with the state(5) to be set 00:25:13.497 [2024-07-13 00:34:00.690318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da65e0 (9): Bad file descriptor 00:25:13.497 [2024-07-13 00:34:00.690339] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.497 [2024-07-13 00:34:00.690350] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.497 [2024-07-13 00:34:00.690362] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.497 [2024-07-13 00:34:00.690393] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.498 [2024-07-13 00:34:00.690405] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.039 [2024-07-13 00:34:02.690480] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:16.039 [2024-07-13 00:34:02.690570] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.039 [2024-07-13 00:34:02.690590] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.039 [2024-07-13 00:34:02.690601] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:16.039 [2024-07-13 00:34:02.690643] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.605 00:25:16.605 Latency(us) 00:25:16.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.605 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:25:16.605 NVMe0n1 : 8.12 3056.81 11.94 15.76 0.00 41622.96 2606.55 7015926.69 00:25:16.605 =================================================================================================================== 00:25:16.605 Total : 3056.81 11.94 15.76 0.00 41622.96 2606.55 7015926.69 00:25:16.605 0 00:25:16.605 00:34:03 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:16.605 Attaching 5 probes... 00:25:16.605 1272.020484: reset bdev controller NVMe0 00:25:16.605 1272.216063: reconnect bdev controller NVMe0 00:25:16.605 3272.500710: reconnect delay bdev controller NVMe0 00:25:16.605 3272.528625: reconnect bdev controller NVMe0 00:25:16.605 5273.021818: reconnect delay bdev controller NVMe0 00:25:16.605 5273.049481: reconnect bdev controller NVMe0 00:25:16.605 7273.537456: reconnect delay bdev controller NVMe0 00:25:16.605 7273.564647: reconnect bdev controller NVMe0 00:25:16.605 00:34:03 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:25:16.605 00:34:03 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:25:16.605 00:34:03 -- host/timeout.sh@136 -- # kill 100368 00:25:16.605 00:34:03 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:16.605 00:34:03 -- host/timeout.sh@139 -- # killprocess 100337 00:25:16.605 00:34:03 -- common/autotest_common.sh@926 -- # '[' -z 100337 ']' 00:25:16.605 00:34:03 -- common/autotest_common.sh@930 -- # kill -0 100337 00:25:16.605 00:34:03 -- common/autotest_common.sh@931 -- # uname 00:25:16.605 00:34:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:16.605 00:34:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100337 00:25:16.605 00:34:03 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:16.605 00:34:03 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:16.605 00:34:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100337' 00:25:16.605 killing process with pid 100337 00:25:16.605 00:34:03 -- common/autotest_common.sh@945 -- # kill 100337 00:25:16.605 Received shutdown signal, test time was about 8.177751 seconds 00:25:16.605 00:25:16.605 Latency(us) 00:25:16.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.605 =================================================================================================================== 00:25:16.605 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:16.605 00:34:03 -- common/autotest_common.sh@950 -- # wait 100337 00:25:16.863 00:34:04 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:17.121 00:34:04 -- host/timeout.sh@143 -- # trap - 
SIGINT SIGTERM EXIT 00:25:17.121 00:34:04 -- host/timeout.sh@145 -- # nvmftestfini 00:25:17.121 00:34:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:17.121 00:34:04 -- nvmf/common.sh@116 -- # sync 00:25:17.121 00:34:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:17.121 00:34:04 -- nvmf/common.sh@119 -- # set +e 00:25:17.121 00:34:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:17.121 00:34:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:17.121 rmmod nvme_tcp 00:25:17.121 rmmod nvme_fabrics 00:25:17.380 rmmod nvme_keyring 00:25:17.380 00:34:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:17.380 00:34:04 -- nvmf/common.sh@123 -- # set -e 00:25:17.380 00:34:04 -- nvmf/common.sh@124 -- # return 0 00:25:17.380 00:34:04 -- nvmf/common.sh@477 -- # '[' -n 99754 ']' 00:25:17.380 00:34:04 -- nvmf/common.sh@478 -- # killprocess 99754 00:25:17.380 00:34:04 -- common/autotest_common.sh@926 -- # '[' -z 99754 ']' 00:25:17.380 00:34:04 -- common/autotest_common.sh@930 -- # kill -0 99754 00:25:17.380 00:34:04 -- common/autotest_common.sh@931 -- # uname 00:25:17.380 00:34:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:17.380 00:34:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 99754 00:25:17.380 00:34:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:17.380 killing process with pid 99754 00:25:17.380 00:34:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:17.380 00:34:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99754' 00:25:17.380 00:34:04 -- common/autotest_common.sh@945 -- # kill 99754 00:25:17.380 00:34:04 -- common/autotest_common.sh@950 -- # wait 99754 00:25:17.639 00:34:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:17.639 00:34:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:17.639 00:34:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:17.639 00:34:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:17.639 00:34:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:17.639 00:34:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.639 00:34:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:17.639 00:34:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.640 00:34:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:17.640 ************************************ 00:25:17.640 END TEST nvmf_timeout 00:25:17.640 ************************************ 00:25:17.640 00:25:17.640 real 0m47.124s 00:25:17.640 user 2m18.188s 00:25:17.640 sys 0m5.182s 00:25:17.640 00:34:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:17.640 00:34:04 -- common/autotest_common.sh@10 -- # set +x 00:25:17.640 00:34:04 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:25:17.640 00:34:04 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:25:17.640 00:34:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:17.640 00:34:04 -- common/autotest_common.sh@10 -- # set +x 00:25:17.640 00:34:04 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:17.640 ************************************ 00:25:17.640 END TEST nvmf_tcp 00:25:17.640 ************************************ 00:25:17.640 00:25:17.640 real 17m17.413s 00:25:17.640 user 54m55.536s 00:25:17.640 sys 3m41.950s 00:25:17.640 00:34:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:17.640 00:34:04 -- common/autotest_common.sh@10 -- # set +x 00:25:17.899 00:34:04 -- spdk/autotest.sh@296 -- 
# [[ 0 -eq 0 ]] 00:25:17.899 00:34:04 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:17.899 00:34:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:17.899 00:34:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:17.899 00:34:04 -- common/autotest_common.sh@10 -- # set +x 00:25:17.899 ************************************ 00:25:17.899 START TEST spdkcli_nvmf_tcp 00:25:17.899 ************************************ 00:25:17.899 00:34:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:17.899 * Looking for test storage... 00:25:17.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:17.899 00:34:04 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:17.899 00:34:04 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:17.899 00:34:04 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:17.899 00:34:04 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:17.899 00:34:04 -- nvmf/common.sh@7 -- # uname -s 00:25:17.899 00:34:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:17.899 00:34:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:17.899 00:34:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:17.899 00:34:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:17.899 00:34:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:17.899 00:34:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:17.899 00:34:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:17.899 00:34:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:17.899 00:34:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:17.899 00:34:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:17.899 00:34:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:25:17.899 00:34:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:25:17.899 00:34:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:17.899 00:34:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:17.899 00:34:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:17.899 00:34:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:17.899 00:34:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:17.899 00:34:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:17.899 00:34:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:17.899 00:34:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.899 00:34:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.899 00:34:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.899 00:34:05 -- paths/export.sh@5 -- # export PATH 00:25:17.899 00:34:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.899 00:34:05 -- nvmf/common.sh@46 -- # : 0 00:25:17.899 00:34:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:17.899 00:34:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:17.899 00:34:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:17.899 00:34:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:17.899 00:34:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:17.899 00:34:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:17.899 00:34:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:17.899 00:34:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:17.899 00:34:05 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:17.899 00:34:05 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:17.899 00:34:05 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:17.899 00:34:05 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:17.899 00:34:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:17.899 00:34:05 -- common/autotest_common.sh@10 -- # set +x 00:25:17.899 00:34:05 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:17.900 00:34:05 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=100634 00:25:17.900 00:34:05 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:17.900 00:34:05 -- spdkcli/common.sh@34 -- # waitforlisten 100634 00:25:17.900 00:34:05 -- common/autotest_common.sh@819 -- # '[' -z 100634 ']' 00:25:17.900 00:34:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.900 00:34:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:17.900 00:34:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.900 00:34:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:17.900 00:34:05 -- common/autotest_common.sh@10 -- # set +x 00:25:17.900 [2024-07-13 00:34:05.069909] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
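
Note: the trace above shows run_nvmf_tgt launching the SPDK NVMe-oF target application with core mask 0x3 and then blocking in waitforlisten until the RPC socket answers. A minimal hand-run sketch of that launch sequence is below; the polling loop and the use of rpc_get_methods as a readiness probe are illustrative assumptions, not the exact waitforlisten implementation.

  # Sketch: start the target on cores 0-1 and wait for the default RPC socket.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
  nvmf_tgt_pid=$!
  # Poll the UNIX-domain RPC socket until the app responds (readiness probe is an assumption).
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
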
00:25:17.900 [2024-07-13 00:34:05.070024] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100634 ] 00:25:18.159 [2024-07-13 00:34:05.204296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:18.159 [2024-07-13 00:34:05.304900] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:18.159 [2024-07-13 00:34:05.305526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.159 [2024-07-13 00:34:05.305536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.095 00:34:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:19.095 00:34:06 -- common/autotest_common.sh@852 -- # return 0 00:25:19.095 00:34:06 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:19.095 00:34:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:19.095 00:34:06 -- common/autotest_common.sh@10 -- # set +x 00:25:19.095 00:34:06 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:19.095 00:34:06 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:19.095 00:34:06 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:19.095 00:34:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:19.095 00:34:06 -- common/autotest_common.sh@10 -- # set +x 00:25:19.095 00:34:06 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:19.095 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:19.095 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:19.095 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:19.095 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:19.095 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:19.095 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:19.095 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:19.095 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:19.095 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:19.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:19.096 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:19.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:19.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:19.096 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:19.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:19.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:19.096 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:19.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:19.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:19.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:19.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:19.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:19.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:19.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:19.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:19.096 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:19.096 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:19.096 ' 00:25:19.355 [2024-07-13 00:34:06.581492] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:21.887 [2024-07-13 00:34:08.824269] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.271 [2024-07-13 00:34:10.090260] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:25.824 [2024-07-13 00:34:12.437402] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:27.727 [2024-07-13 00:34:14.480178] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:29.105 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:29.105 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:29.105 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:29.105 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:29.105 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:29.105 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:29.105 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:29.105 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:29.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:29.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:29.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:29.105 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:29.105 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:29.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:29.105 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:29.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:29.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:29.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:29.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:29.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:29.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:29.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:29.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:29.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:29.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:29.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:29.105 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:29.105 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:29.105 00:34:16 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:29.105 00:34:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:29.105 00:34:16 -- common/autotest_common.sh@10 -- # set +x 00:25:29.105 00:34:16 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:29.105 00:34:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:29.105 00:34:16 -- common/autotest_common.sh@10 -- # set +x 00:25:29.105 00:34:16 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:29.105 00:34:16 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:29.672 00:34:16 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:29.672 00:34:16 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:29.672 00:34:16 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:29.672 00:34:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:29.672 00:34:16 -- common/autotest_common.sh@10 -- # set +x 00:25:29.672 00:34:16 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:29.672 00:34:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:29.672 00:34:16 -- 
common/autotest_common.sh@10 -- # set +x 00:25:29.672 00:34:16 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:29.672 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:29.672 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:29.672 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:29.672 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:29.672 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:29.672 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:29.672 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:29.672 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:29.672 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:29.672 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:29.672 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:29.672 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:29.672 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:29.672 ' 00:25:34.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:34.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:34.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:34.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:34.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:34.967 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:34.967 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:34.967 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:34.967 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:34.967 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:34.967 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:34.967 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:34.967 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:34.967 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:35.227 00:34:22 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:35.227 00:34:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:35.227 00:34:22 -- common/autotest_common.sh@10 -- # set +x 00:25:35.227 00:34:22 -- spdkcli/nvmf.sh@90 -- # killprocess 100634 00:25:35.227 00:34:22 -- common/autotest_common.sh@926 -- # '[' -z 100634 ']' 00:25:35.227 00:34:22 -- common/autotest_common.sh@930 -- # kill -0 100634 00:25:35.227 00:34:22 -- common/autotest_common.sh@931 -- # uname 00:25:35.227 00:34:22 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:35.227 00:34:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100634 00:25:35.227 00:34:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:35.227 00:34:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:35.227 00:34:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100634' 00:25:35.227 killing process with pid 100634 00:25:35.227 00:34:22 -- common/autotest_common.sh@945 -- # kill 100634 00:25:35.227 [2024-07-13 00:34:22.290230] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:35.227 00:34:22 -- common/autotest_common.sh@950 -- # wait 100634 00:25:35.486 00:34:22 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:35.486 00:34:22 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:35.486 00:34:22 -- spdkcli/common.sh@13 -- # '[' -n 100634 ']' 00:25:35.486 00:34:22 -- spdkcli/common.sh@14 -- # killprocess 100634 00:25:35.486 00:34:22 -- common/autotest_common.sh@926 -- # '[' -z 100634 ']' 00:25:35.486 Process with pid 100634 is not found 00:25:35.486 00:34:22 -- common/autotest_common.sh@930 -- # kill -0 100634 00:25:35.486 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (100634) - No such process 00:25:35.486 00:34:22 -- common/autotest_common.sh@953 -- # echo 'Process with pid 100634 is not found' 00:25:35.486 00:34:22 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:35.486 00:34:22 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:35.486 00:34:22 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:35.486 ************************************ 00:25:35.486 END TEST spdkcli_nvmf_tcp 00:25:35.486 ************************************ 00:25:35.486 00:25:35.486 real 0m17.667s 00:25:35.486 user 0m37.884s 00:25:35.486 sys 0m1.053s 00:25:35.486 00:34:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:35.486 00:34:22 -- common/autotest_common.sh@10 -- # set +x 00:25:35.486 00:34:22 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:35.486 00:34:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:35.486 00:34:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:35.486 00:34:22 -- common/autotest_common.sh@10 -- # set +x 00:25:35.486 ************************************ 00:25:35.486 START TEST nvmf_identify_passthru 00:25:35.486 ************************************ 00:25:35.486 00:34:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:35.486 * Looking for test storage... 
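
Note on the spdkcli_nvmf_tcp run that ends above: spdkcli_job.py drives scripts/spdkcli.py with command/expected-output pairs, then the ll /nvmf dump is diffed against the match file. A hand-run sketch of a few of the same configuration steps follows, assuming spdkcli.py accepts one command per invocation (as the ll /nvmf match step above does); bdev sizes, the serial number and the listen address are copied from the trace, and only a subset of the commands is shown.

  cli=/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py
  $cli "/bdevs/malloc create 32 512 Malloc3"
  $cli "nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
  $cli "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
  $cli "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1"
  $cli "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4"
  $cli "ll /nvmf"   # dump the configuration tree, as the check_match step does
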
00:25:35.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:35.486 00:34:22 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:35.486 00:34:22 -- nvmf/common.sh@7 -- # uname -s 00:25:35.486 00:34:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.486 00:34:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.486 00:34:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.486 00:34:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.486 00:34:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.486 00:34:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.486 00:34:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.486 00:34:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.486 00:34:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.486 00:34:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.746 00:34:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:25:35.746 00:34:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:25:35.746 00:34:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.746 00:34:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.746 00:34:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:35.746 00:34:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:35.746 00:34:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.746 00:34:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.746 00:34:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.746 00:34:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.746 00:34:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.746 00:34:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.746 00:34:22 -- paths/export.sh@5 -- # export PATH 00:25:35.746 00:34:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.746 00:34:22 -- nvmf/common.sh@46 -- # : 0 00:25:35.746 00:34:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:35.746 00:34:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:35.746 00:34:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:35.746 00:34:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.746 00:34:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.746 00:34:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:35.746 00:34:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:35.746 00:34:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:35.746 00:34:22 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:35.746 00:34:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.746 00:34:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.747 00:34:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.747 00:34:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.747 00:34:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.747 00:34:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.747 00:34:22 -- paths/export.sh@5 -- # export PATH 00:25:35.747 00:34:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.747 00:34:22 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:25:35.747 00:34:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:35.747 00:34:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.747 00:34:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:35.747 00:34:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:35.747 00:34:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:35.747 00:34:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.747 00:34:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:35.747 00:34:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.747 00:34:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:35.747 00:34:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:35.747 00:34:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:35.747 00:34:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:35.747 00:34:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:35.747 00:34:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:35.747 00:34:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.747 00:34:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:35.747 00:34:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:35.747 00:34:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:35.747 00:34:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:35.747 00:34:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:35.747 00:34:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:35.747 00:34:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.747 00:34:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:35.747 00:34:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:35.747 00:34:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:35.747 00:34:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:35.747 00:34:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:35.747 00:34:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:35.747 Cannot find device "nvmf_tgt_br" 00:25:35.747 00:34:22 -- nvmf/common.sh@154 -- # true 00:25:35.747 00:34:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:35.747 Cannot find device "nvmf_tgt_br2" 00:25:35.747 00:34:22 -- nvmf/common.sh@155 -- # true 00:25:35.747 00:34:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:35.747 00:34:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:35.747 Cannot find device "nvmf_tgt_br" 00:25:35.747 00:34:22 -- nvmf/common.sh@157 -- # true 00:25:35.747 00:34:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:35.747 Cannot find device "nvmf_tgt_br2" 00:25:35.747 00:34:22 -- nvmf/common.sh@158 -- # true 00:25:35.747 00:34:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:35.747 00:34:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:35.747 00:34:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:35.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:35.747 00:34:22 -- nvmf/common.sh@161 -- # true 00:25:35.747 00:34:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:35.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:25:35.747 00:34:22 -- nvmf/common.sh@162 -- # true 00:25:35.747 00:34:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:35.747 00:34:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:35.747 00:34:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:35.747 00:34:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:35.747 00:34:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:35.747 00:34:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:35.747 00:34:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:35.747 00:34:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:35.747 00:34:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:35.747 00:34:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:35.747 00:34:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:35.747 00:34:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:35.747 00:34:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:35.747 00:34:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:35.747 00:34:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:35.747 00:34:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:36.004 00:34:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:36.004 00:34:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:36.004 00:34:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:36.004 00:34:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:36.004 00:34:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:36.004 00:34:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:36.004 00:34:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:36.004 00:34:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:36.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:25:36.004 00:25:36.004 --- 10.0.0.2 ping statistics --- 00:25:36.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.005 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:25:36.005 00:34:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:36.005 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:36.005 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:25:36.005 00:25:36.005 --- 10.0.0.3 ping statistics --- 00:25:36.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.005 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:25:36.005 00:34:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:36.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:36.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:25:36.005 00:25:36.005 --- 10.0.0.1 ping statistics --- 00:25:36.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.005 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:25:36.005 00:34:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.005 00:34:23 -- nvmf/common.sh@421 -- # return 0 00:25:36.005 00:34:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:36.005 00:34:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.005 00:34:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:36.005 00:34:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:36.005 00:34:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.005 00:34:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:36.005 00:34:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:36.005 00:34:23 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:36.005 00:34:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:36.005 00:34:23 -- common/autotest_common.sh@10 -- # set +x 00:25:36.005 00:34:23 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:36.005 00:34:23 -- common/autotest_common.sh@1509 -- # bdfs=() 00:25:36.005 00:34:23 -- common/autotest_common.sh@1509 -- # local bdfs 00:25:36.005 00:34:23 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:25:36.005 00:34:23 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:25:36.005 00:34:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:36.005 00:34:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:25:36.005 00:34:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:36.005 00:34:23 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:36.005 00:34:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:36.005 00:34:23 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:25:36.005 00:34:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:36.005 00:34:23 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:25:36.005 00:34:23 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:36.005 00:34:23 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:36.005 00:34:23 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:36.005 00:34:23 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:36.005 00:34:23 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:36.262 00:34:23 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:36.263 00:34:23 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:36.263 00:34:23 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:36.263 00:34:23 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:36.520 00:34:23 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:36.520 00:34:23 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:36.520 00:34:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:36.520 00:34:23 -- common/autotest_common.sh@10 -- # set +x 00:25:36.520 00:34:23 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:36.520 00:34:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:36.520 00:34:23 -- common/autotest_common.sh@10 -- # set +x 00:25:36.520 00:34:23 -- target/identify_passthru.sh@31 -- # nvmfpid=101137 00:25:36.520 00:34:23 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:36.520 00:34:23 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:36.520 00:34:23 -- target/identify_passthru.sh@35 -- # waitforlisten 101137 00:25:36.520 00:34:23 -- common/autotest_common.sh@819 -- # '[' -z 101137 ']' 00:25:36.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.520 00:34:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.520 00:34:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:36.520 00:34:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.520 00:34:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:36.520 00:34:23 -- common/autotest_common.sh@10 -- # set +x 00:25:36.520 [2024-07-13 00:34:23.606141] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:36.520 [2024-07-13 00:34:23.606244] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.779 [2024-07-13 00:34:23.750503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:36.779 [2024-07-13 00:34:23.863836] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:36.779 [2024-07-13 00:34:23.864357] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.779 [2024-07-13 00:34:23.864509] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.779 [2024-07-13 00:34:23.864687] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
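
Note: the get_first_nvme_bdf / nvme_identify steps above pick the first local NVMe controller and record its serial and model numbers so they can later be compared with what the passthru subsystem reports over TCP. A condensed sketch using only commands that appear in the trace; head -n1 stands in for the harness's bdf-selection logic and is an assumption.

  rootdir=/home/vagrant/spdk_repo/spdk
  bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
  serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
           | grep 'Serial Number:' | awk '{print $3}')
  model=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
           | grep 'Model Number:' | awk '{print $3}')
  echo "first NVMe bdf=$bdf serial=$serial model=$model"
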
00:25:36.779 [2024-07-13 00:34:23.864948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.779 [2024-07-13 00:34:23.865204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:36.779 [2024-07-13 00:34:23.865343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:36.779 [2024-07-13 00:34:23.865358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.714 00:34:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:37.714 00:34:24 -- common/autotest_common.sh@852 -- # return 0 00:25:37.714 00:34:24 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:37.714 00:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.714 00:34:24 -- common/autotest_common.sh@10 -- # set +x 00:25:37.714 00:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.714 00:34:24 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:37.714 00:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.714 00:34:24 -- common/autotest_common.sh@10 -- # set +x 00:25:37.714 [2024-07-13 00:34:24.723008] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:37.714 00:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.714 00:34:24 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:37.714 00:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.714 00:34:24 -- common/autotest_common.sh@10 -- # set +x 00:25:37.714 [2024-07-13 00:34:24.733592] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.714 00:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.714 00:34:24 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:37.714 00:34:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:37.714 00:34:24 -- common/autotest_common.sh@10 -- # set +x 00:25:37.714 00:34:24 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:37.714 00:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.714 00:34:24 -- common/autotest_common.sh@10 -- # set +x 00:25:37.714 Nvme0n1 00:25:37.714 00:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.714 00:34:24 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:37.714 00:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.714 00:34:24 -- common/autotest_common.sh@10 -- # set +x 00:25:37.714 00:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.714 00:34:24 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:37.714 00:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.714 00:34:24 -- common/autotest_common.sh@10 -- # set +x 00:25:37.714 00:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.714 00:34:24 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:37.714 00:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.714 00:34:24 -- common/autotest_common.sh@10 -- # set +x 00:25:37.714 [2024-07-13 00:34:24.870040] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.714 00:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:25:37.714 00:34:24 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:37.714 00:34:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.714 00:34:24 -- common/autotest_common.sh@10 -- # set +x 00:25:37.714 [2024-07-13 00:34:24.877709] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:37.714 [ 00:25:37.714 { 00:25:37.714 "allow_any_host": true, 00:25:37.714 "hosts": [], 00:25:37.714 "listen_addresses": [], 00:25:37.714 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:37.714 "subtype": "Discovery" 00:25:37.714 }, 00:25:37.714 { 00:25:37.714 "allow_any_host": true, 00:25:37.714 "hosts": [], 00:25:37.714 "listen_addresses": [ 00:25:37.714 { 00:25:37.714 "adrfam": "IPv4", 00:25:37.714 "traddr": "10.0.0.2", 00:25:37.714 "transport": "TCP", 00:25:37.714 "trsvcid": "4420", 00:25:37.714 "trtype": "TCP" 00:25:37.714 } 00:25:37.714 ], 00:25:37.714 "max_cntlid": 65519, 00:25:37.714 "max_namespaces": 1, 00:25:37.714 "min_cntlid": 1, 00:25:37.714 "model_number": "SPDK bdev Controller", 00:25:37.714 "namespaces": [ 00:25:37.714 { 00:25:37.714 "bdev_name": "Nvme0n1", 00:25:37.714 "name": "Nvme0n1", 00:25:37.714 "nguid": "5CC3297189D249499CCFFBDF47538F2C", 00:25:37.714 "nsid": 1, 00:25:37.714 "uuid": "5cc32971-89d2-4949-9ccf-fbdf47538f2c" 00:25:37.714 } 00:25:37.714 ], 00:25:37.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:37.714 "serial_number": "SPDK00000000000001", 00:25:37.714 "subtype": "NVMe" 00:25:37.714 } 00:25:37.714 ] 00:25:37.714 00:34:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.714 00:34:24 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:37.714 00:34:24 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:37.714 00:34:24 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:37.972 00:34:25 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:37.972 00:34:25 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:37.972 00:34:25 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:37.972 00:34:25 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:38.231 00:34:25 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:38.231 00:34:25 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:38.231 00:34:25 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:38.231 00:34:25 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:38.231 00:34:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.231 00:34:25 -- common/autotest_common.sh@10 -- # set +x 00:25:38.231 00:34:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.231 00:34:25 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:38.231 00:34:25 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:38.231 00:34:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:38.231 00:34:25 -- nvmf/common.sh@116 -- # sync 00:25:38.231 00:34:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:38.231 00:34:25 -- nvmf/common.sh@119 -- # set +e 00:25:38.231 00:34:25 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:38.231 00:34:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:38.231 rmmod nvme_tcp 00:25:38.231 rmmod nvme_fabrics 00:25:38.231 rmmod nvme_keyring 00:25:38.231 00:34:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:38.231 00:34:25 -- nvmf/common.sh@123 -- # set -e 00:25:38.231 00:34:25 -- nvmf/common.sh@124 -- # return 0 00:25:38.231 00:34:25 -- nvmf/common.sh@477 -- # '[' -n 101137 ']' 00:25:38.231 00:34:25 -- nvmf/common.sh@478 -- # killprocess 101137 00:25:38.231 00:34:25 -- common/autotest_common.sh@926 -- # '[' -z 101137 ']' 00:25:38.231 00:34:25 -- common/autotest_common.sh@930 -- # kill -0 101137 00:25:38.231 00:34:25 -- common/autotest_common.sh@931 -- # uname 00:25:38.231 00:34:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:38.231 00:34:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 101137 00:25:38.489 killing process with pid 101137 00:25:38.489 00:34:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:38.489 00:34:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:38.489 00:34:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 101137' 00:25:38.489 00:34:25 -- common/autotest_common.sh@945 -- # kill 101137 00:25:38.489 [2024-07-13 00:34:25.469157] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:38.489 00:34:25 -- common/autotest_common.sh@950 -- # wait 101137 00:25:38.748 00:34:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:38.748 00:34:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:38.748 00:34:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:38.748 00:34:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:38.748 00:34:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:38.748 00:34:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.748 00:34:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:38.748 00:34:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.748 00:34:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:38.748 00:25:38.748 real 0m3.172s 00:25:38.748 user 0m7.904s 00:25:38.748 sys 0m0.833s 00:25:38.748 00:34:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:38.748 00:34:25 -- common/autotest_common.sh@10 -- # set +x 00:25:38.748 ************************************ 00:25:38.748 END TEST nvmf_identify_passthru 00:25:38.748 ************************************ 00:25:38.748 00:34:25 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:38.748 00:34:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:38.748 00:34:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:38.748 00:34:25 -- common/autotest_common.sh@10 -- # set +x 00:25:38.748 ************************************ 00:25:38.748 START TEST nvmf_dif 00:25:38.748 ************************************ 00:25:38.748 00:34:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:38.748 * Looking for test storage... 
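
Note on the nvmf_identify_passthru run that ends above: because the target was started with --wait-for-rpc, the test enables the custom identify handler before framework init, attaches the local controller as bdev Nvme0, exports it over TCP, and then checks that identify data fetched through the fabric matches the PCIe-side values. A sketch of the RPC sequence, using scripts/rpc.py directly in place of the test's rpc_cmd wrapper (that substitution is an assumption; the RPC names and arguments are taken from the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_set_config --passthru-identify-ctrlr      # must precede framework init
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Identify through the fabric and compare with the PCIe-side serial/model captured earlier.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      | grep -E 'Serial Number:|Model Number:'
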
00:25:38.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:38.748 00:34:25 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:38.748 00:34:25 -- nvmf/common.sh@7 -- # uname -s 00:25:38.748 00:34:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.748 00:34:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.748 00:34:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.748 00:34:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.748 00:34:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.748 00:34:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.748 00:34:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.748 00:34:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.748 00:34:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.748 00:34:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.748 00:34:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:25:38.748 00:34:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:25:38.748 00:34:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.748 00:34:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.748 00:34:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:38.748 00:34:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:38.748 00:34:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.748 00:34:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.748 00:34:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.748 00:34:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.749 00:34:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.749 00:34:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.749 00:34:25 -- paths/export.sh@5 -- # export PATH 00:25:38.749 00:34:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.749 00:34:25 -- nvmf/common.sh@46 -- # : 0 00:25:38.749 00:34:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:38.749 00:34:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:38.749 00:34:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:38.749 00:34:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.749 00:34:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.749 00:34:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:38.749 00:34:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:38.749 00:34:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:38.749 00:34:25 -- target/dif.sh@15 -- # NULL_META=16 00:25:38.749 00:34:25 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:38.749 00:34:25 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:38.749 00:34:25 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:38.749 00:34:25 -- target/dif.sh@135 -- # nvmftestinit 00:25:38.749 00:34:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:38.749 00:34:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.749 00:34:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:38.749 00:34:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:38.749 00:34:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:38.749 00:34:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.749 00:34:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:38.749 00:34:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.749 00:34:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:38.749 00:34:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:38.749 00:34:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:38.749 00:34:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:38.749 00:34:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:38.749 00:34:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:38.749 00:34:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.749 00:34:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.749 00:34:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:38.749 00:34:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:38.749 00:34:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:38.749 00:34:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:38.749 00:34:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:38.749 00:34:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.749 00:34:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:38.749 00:34:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:38.749 00:34:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:38.749 00:34:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:38.749 00:34:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:38.749 00:34:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:38.749 Cannot find device "nvmf_tgt_br" 
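
Note: the 'Cannot find device ...' and 'Cannot open network namespace ...' messages here and just below are expected. nvmf_veth_init begins with a best-effort teardown of any topology left over from the previous test before re-creating the namespace, veth pairs and bridge; the traced 'true' after each failing removal shows the errors are tolerated. Roughly (the '|| true' form is an inference from those traced 'true' commands):

  # Best-effort cleanup of a previous run; errors are swallowed on purpose.
  ip link set nvmf_init_br nomaster || true
  ip link set nvmf_tgt_br nomaster || true
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
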
00:25:38.749 00:34:25 -- nvmf/common.sh@154 -- # true 00:25:38.749 00:34:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:39.008 Cannot find device "nvmf_tgt_br2" 00:25:39.008 00:34:25 -- nvmf/common.sh@155 -- # true 00:25:39.008 00:34:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:39.008 00:34:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:39.008 Cannot find device "nvmf_tgt_br" 00:25:39.008 00:34:25 -- nvmf/common.sh@157 -- # true 00:25:39.008 00:34:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:39.008 Cannot find device "nvmf_tgt_br2" 00:25:39.008 00:34:26 -- nvmf/common.sh@158 -- # true 00:25:39.008 00:34:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:39.008 00:34:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:39.008 00:34:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:39.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:39.008 00:34:26 -- nvmf/common.sh@161 -- # true 00:25:39.008 00:34:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:39.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:39.008 00:34:26 -- nvmf/common.sh@162 -- # true 00:25:39.008 00:34:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:39.008 00:34:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:39.008 00:34:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:39.008 00:34:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:39.008 00:34:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:39.008 00:34:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:39.008 00:34:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:39.008 00:34:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:39.008 00:34:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:39.008 00:34:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:39.008 00:34:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:39.008 00:34:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:39.008 00:34:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:39.008 00:34:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:39.008 00:34:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:39.008 00:34:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:39.008 00:34:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:39.008 00:34:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:39.008 00:34:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:39.008 00:34:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:39.008 00:34:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:39.268 00:34:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:39.268 00:34:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:39.268 00:34:26 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:39.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:25:39.268 00:25:39.268 --- 10.0.0.2 ping statistics --- 00:25:39.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.268 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:25:39.268 00:34:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:39.268 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:39.268 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:25:39.268 00:25:39.268 --- 10.0.0.3 ping statistics --- 00:25:39.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.268 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:25:39.268 00:34:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:39.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:39.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:25:39.268 00:25:39.268 --- 10.0.0.1 ping statistics --- 00:25:39.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.268 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:25:39.268 00:34:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.268 00:34:26 -- nvmf/common.sh@421 -- # return 0 00:25:39.268 00:34:26 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:39.268 00:34:26 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:39.527 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:39.527 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:39.527 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:39.527 00:34:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.527 00:34:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:39.527 00:34:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:39.527 00:34:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.527 00:34:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:39.527 00:34:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:39.527 00:34:26 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:39.527 00:34:26 -- target/dif.sh@137 -- # nvmfappstart 00:25:39.527 00:34:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:39.527 00:34:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:39.527 00:34:26 -- common/autotest_common.sh@10 -- # set +x 00:25:39.527 00:34:26 -- nvmf/common.sh@469 -- # nvmfpid=101482 00:25:39.527 00:34:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:39.527 00:34:26 -- nvmf/common.sh@470 -- # waitforlisten 101482 00:25:39.527 00:34:26 -- common/autotest_common.sh@819 -- # '[' -z 101482 ']' 00:25:39.527 00:34:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.527 00:34:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:39.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.527 00:34:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
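Editor's note: at this point nvmftestinit has built the test network (host-side nvmf_init_if at 10.0.0.1/24, nvmf_tgt_if at 10.0.0.2/24 inside the nvmf_tgt_ns_spdk namespace, joined through the nvmf_br bridge; a second pair, nvmf_tgt_if2 at 10.0.0.3, is created the same way), the pings above confirm connectivity, and nvmfappstart launches the target inside the namespace. A standalone sketch of the same bring-up with the names and addresses from the log (illustrative only; common.sh wraps these steps in helper functions):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Start the target inside the namespace, as nvmfappstart does above; the test then
  # waits for it to listen on /var/tmp/spdk.sock before issuing RPCs.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &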
00:25:39.527 00:34:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:39.527 00:34:26 -- common/autotest_common.sh@10 -- # set +x 00:25:39.527 [2024-07-13 00:34:26.745094] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:39.527 [2024-07-13 00:34:26.745213] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.786 [2024-07-13 00:34:26.888235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.786 [2024-07-13 00:34:27.005217] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:39.786 [2024-07-13 00:34:27.005404] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.786 [2024-07-13 00:34:27.005421] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.786 [2024-07-13 00:34:27.005434] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:39.786 [2024-07-13 00:34:27.005466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.723 00:34:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:40.723 00:34:27 -- common/autotest_common.sh@852 -- # return 0 00:25:40.723 00:34:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:40.723 00:34:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:40.723 00:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:40.723 00:34:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.723 00:34:27 -- target/dif.sh@139 -- # create_transport 00:25:40.723 00:34:27 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:40.723 00:34:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.723 00:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:40.723 [2024-07-13 00:34:27.772642] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.723 00:34:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.723 00:34:27 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:40.723 00:34:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:40.723 00:34:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:40.723 00:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:40.723 ************************************ 00:25:40.723 START TEST fio_dif_1_default 00:25:40.723 ************************************ 00:25:40.723 00:34:27 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:25:40.723 00:34:27 -- target/dif.sh@86 -- # create_subsystems 0 00:25:40.723 00:34:27 -- target/dif.sh@28 -- # local sub 00:25:40.723 00:34:27 -- target/dif.sh@30 -- # for sub in "$@" 00:25:40.723 00:34:27 -- target/dif.sh@31 -- # create_subsystem 0 00:25:40.723 00:34:27 -- target/dif.sh@18 -- # local sub_id=0 00:25:40.723 00:34:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:40.723 00:34:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.723 00:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:40.723 bdev_null0 00:25:40.723 00:34:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.723 00:34:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:40.723 00:34:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.723 00:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:40.723 00:34:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.723 00:34:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:40.723 00:34:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.723 00:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:40.723 00:34:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.723 00:34:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:40.723 00:34:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.723 00:34:27 -- common/autotest_common.sh@10 -- # set +x 00:25:40.723 [2024-07-13 00:34:27.816789] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.723 00:34:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.723 00:34:27 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:40.723 00:34:27 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:40.723 00:34:27 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:40.723 00:34:27 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:40.723 00:34:27 -- target/dif.sh@82 -- # gen_fio_conf 00:25:40.723 00:34:27 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:40.723 00:34:27 -- target/dif.sh@54 -- # local file 00:25:40.723 00:34:27 -- target/dif.sh@56 -- # cat 00:25:40.723 00:34:27 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:40.723 00:34:27 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:40.723 00:34:27 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:40.723 00:34:27 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:40.723 00:34:27 -- common/autotest_common.sh@1320 -- # shift 00:25:40.723 00:34:27 -- nvmf/common.sh@520 -- # config=() 00:25:40.723 00:34:27 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:40.723 00:34:27 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:40.723 00:34:27 -- nvmf/common.sh@520 -- # local subsystem config 00:25:40.723 00:34:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:40.723 00:34:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:40.723 { 00:25:40.723 "params": { 00:25:40.723 "name": "Nvme$subsystem", 00:25:40.723 "trtype": "$TEST_TRANSPORT", 00:25:40.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:40.723 "adrfam": "ipv4", 00:25:40.723 "trsvcid": "$NVMF_PORT", 00:25:40.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:40.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:40.723 "hdgst": ${hdgst:-false}, 00:25:40.723 "ddgst": ${ddgst:-false} 00:25:40.723 }, 00:25:40.723 "method": "bdev_nvme_attach_controller" 00:25:40.723 } 00:25:40.723 EOF 00:25:40.723 )") 00:25:40.723 00:34:27 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:40.723 00:34:27 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:40.723 00:34:27 -- target/dif.sh@72 -- # (( file <= files )) 00:25:40.723 00:34:27 -- common/autotest_common.sh@1324 -- # grep libasan 
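Editor's note: the subsystem setup that just ran maps onto four plain RPC calls: create a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1 protection; create subsystem nqn.2016-06.io.spdk:cnode0; attach the bdev as its namespace; and open a TCP listener on 10.0.0.2:4420. rpc_cmd is essentially a wrapper around scripts/rpc.py, so an equivalent manual sequence against the target started above would look roughly like this (the socket path shown is the default /var/tmp/spdk.sock used by this run):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420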
00:25:40.723 00:34:27 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:40.723 00:34:27 -- nvmf/common.sh@542 -- # cat 00:25:40.723 00:34:27 -- nvmf/common.sh@544 -- # jq . 00:25:40.723 00:34:27 -- nvmf/common.sh@545 -- # IFS=, 00:25:40.723 00:34:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:40.723 "params": { 00:25:40.723 "name": "Nvme0", 00:25:40.723 "trtype": "tcp", 00:25:40.723 "traddr": "10.0.0.2", 00:25:40.723 "adrfam": "ipv4", 00:25:40.723 "trsvcid": "4420", 00:25:40.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:40.723 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:40.723 "hdgst": false, 00:25:40.723 "ddgst": false 00:25:40.723 }, 00:25:40.723 "method": "bdev_nvme_attach_controller" 00:25:40.723 }' 00:25:40.723 00:34:27 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:40.723 00:34:27 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:40.724 00:34:27 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:40.724 00:34:27 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:40.724 00:34:27 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:40.724 00:34:27 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:40.724 00:34:27 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:40.724 00:34:27 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:40.724 00:34:27 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:40.724 00:34:27 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:40.982 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:40.982 fio-3.35 00:25:40.982 Starting 1 thread 00:25:41.241 [2024-07-13 00:34:28.447395] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
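Editor's note: the fio run that starts next is driven entirely through file descriptors: the generated job file arrives on /dev/fd/61, the bdev_nvme JSON printed just above arrives on /dev/fd/62, and LD_PRELOAD pulls in the SPDK fio plugin so the spdk_bdev ioengine can attach to the target over NVMe/TCP (the attached controller Nvme0 is exposed to fio as bdev Nvme0n1). A rough manual equivalent using an ordinary file instead of /dev/fd; the job parameters and the JSON wrapper structure are approximations, not a verbatim copy of what gen_fio_conf and gen_nvmf_target_json emit:

  # bdev.json holds the bdev_nvme_attach_controller entry printed above; gen_nvmf_target_json
  # wraps such entries in SPDK's {"subsystems":[{"subsystem":"bdev","config":[...]}]} layout
  # (wrapper structure assumed here, it is not shown verbatim in the log).
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
      --name=filename0 --filename=Nvme0n1 --thread=1 \
      --rw=randread --bs=4k --iodepth=4 --runtime=10 --time_based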
00:25:41.241 [2024-07-13 00:34:28.447466] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:53.446 00:25:53.446 filename0: (groupid=0, jobs=1): err= 0: pid=101565: Sat Jul 13 00:34:38 2024 00:25:53.446 read: IOPS=2316, BW=9267KiB/s (9489kB/s)(90.8MiB/10035msec) 00:25:53.446 slat (nsec): min=6373, max=49641, avg=7843.89, stdev=3166.49 00:25:53.446 clat (usec): min=368, max=42462, avg=1703.26, stdev=7071.90 00:25:53.446 lat (usec): min=375, max=42471, avg=1711.11, stdev=7071.94 00:25:53.446 clat percentiles (usec): 00:25:53.446 | 1.00th=[ 375], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[ 396], 00:25:53.446 | 30.00th=[ 408], 40.00th=[ 416], 50.00th=[ 424], 60.00th=[ 433], 00:25:53.446 | 70.00th=[ 445], 80.00th=[ 457], 90.00th=[ 482], 95.00th=[ 515], 00:25:53.446 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:25:53.446 | 99.99th=[42206] 00:25:53.446 bw ( KiB/s): min= 5888, max=17376, per=100.00%, avg=9296.85, stdev=2641.54, samples=20 00:25:53.446 iops : min= 1472, max= 4344, avg=2324.20, stdev=660.40, samples=20 00:25:53.446 lat (usec) : 500=93.62%, 750=3.18% 00:25:53.446 lat (msec) : 2=0.02%, 4=0.02%, 10=0.02%, 50=3.15% 00:25:53.446 cpu : usr=90.89%, sys=8.03%, ctx=25, majf=0, minf=0 00:25:53.446 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:53.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.446 issued rwts: total=23248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.446 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:53.446 00:25:53.446 Run status group 0 (all jobs): 00:25:53.446 READ: bw=9267KiB/s (9489kB/s), 9267KiB/s-9267KiB/s (9489kB/s-9489kB/s), io=90.8MiB (95.2MB), run=10035-10035msec 00:25:53.446 00:34:38 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:53.446 00:34:38 -- target/dif.sh@43 -- # local sub 00:25:53.446 00:34:38 -- target/dif.sh@45 -- # for sub in "$@" 00:25:53.446 00:34:38 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:53.446 00:34:38 -- target/dif.sh@36 -- # local sub_id=0 00:25:53.446 00:34:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:53.446 00:34:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.446 00:34:38 -- common/autotest_common.sh@10 -- # set +x 00:25:53.446 00:34:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.446 00:34:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:53.446 00:34:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.446 00:34:38 -- common/autotest_common.sh@10 -- # set +x 00:25:53.446 00:34:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.446 00:25:53.446 real 0m11.043s 00:25:53.446 user 0m9.785s 00:25:53.446 sys 0m1.070s 00:25:53.446 00:34:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:53.446 00:34:38 -- common/autotest_common.sh@10 -- # set +x 00:25:53.446 ************************************ 00:25:53.446 END TEST fio_dif_1_default 00:25:53.446 ************************************ 00:25:53.446 00:34:38 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:53.446 00:34:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:53.446 00:34:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:53.446 00:34:38 -- common/autotest_common.sh@10 -- # set +x 00:25:53.446 ************************************ 
00:25:53.446 START TEST fio_dif_1_multi_subsystems 00:25:53.446 ************************************ 00:25:53.446 00:34:38 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:25:53.446 00:34:38 -- target/dif.sh@92 -- # local files=1 00:25:53.446 00:34:38 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:53.446 00:34:38 -- target/dif.sh@28 -- # local sub 00:25:53.446 00:34:38 -- target/dif.sh@30 -- # for sub in "$@" 00:25:53.446 00:34:38 -- target/dif.sh@31 -- # create_subsystem 0 00:25:53.446 00:34:38 -- target/dif.sh@18 -- # local sub_id=0 00:25:53.446 00:34:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:53.446 00:34:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.446 00:34:38 -- common/autotest_common.sh@10 -- # set +x 00:25:53.446 bdev_null0 00:25:53.446 00:34:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.446 00:34:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:53.447 00:34:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.447 00:34:38 -- common/autotest_common.sh@10 -- # set +x 00:25:53.447 00:34:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.447 00:34:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:53.447 00:34:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.447 00:34:38 -- common/autotest_common.sh@10 -- # set +x 00:25:53.447 00:34:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.447 00:34:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:53.447 00:34:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.447 00:34:38 -- common/autotest_common.sh@10 -- # set +x 00:25:53.447 [2024-07-13 00:34:38.910075] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.447 00:34:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.447 00:34:38 -- target/dif.sh@30 -- # for sub in "$@" 00:25:53.447 00:34:38 -- target/dif.sh@31 -- # create_subsystem 1 00:25:53.447 00:34:38 -- target/dif.sh@18 -- # local sub_id=1 00:25:53.447 00:34:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:53.447 00:34:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.447 00:34:38 -- common/autotest_common.sh@10 -- # set +x 00:25:53.447 bdev_null1 00:25:53.447 00:34:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.447 00:34:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:53.447 00:34:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.447 00:34:38 -- common/autotest_common.sh@10 -- # set +x 00:25:53.447 00:34:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.447 00:34:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:53.447 00:34:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.447 00:34:38 -- common/autotest_common.sh@10 -- # set +x 00:25:53.447 00:34:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.447 00:34:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:53.447 00:34:38 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:25:53.447 00:34:38 -- common/autotest_common.sh@10 -- # set +x 00:25:53.447 00:34:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.447 00:34:38 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:53.447 00:34:38 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:53.447 00:34:38 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:53.447 00:34:38 -- nvmf/common.sh@520 -- # config=() 00:25:53.447 00:34:38 -- nvmf/common.sh@520 -- # local subsystem config 00:25:53.447 00:34:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:53.447 00:34:38 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:53.447 00:34:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:53.447 { 00:25:53.447 "params": { 00:25:53.447 "name": "Nvme$subsystem", 00:25:53.447 "trtype": "$TEST_TRANSPORT", 00:25:53.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.447 "adrfam": "ipv4", 00:25:53.447 "trsvcid": "$NVMF_PORT", 00:25:53.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.447 "hdgst": ${hdgst:-false}, 00:25:53.447 "ddgst": ${ddgst:-false} 00:25:53.447 }, 00:25:53.447 "method": "bdev_nvme_attach_controller" 00:25:53.447 } 00:25:53.447 EOF 00:25:53.447 )") 00:25:53.447 00:34:38 -- target/dif.sh@82 -- # gen_fio_conf 00:25:53.447 00:34:38 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:53.447 00:34:38 -- target/dif.sh@54 -- # local file 00:25:53.447 00:34:38 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:53.447 00:34:38 -- target/dif.sh@56 -- # cat 00:25:53.447 00:34:38 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:53.447 00:34:38 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:53.447 00:34:38 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:53.447 00:34:38 -- common/autotest_common.sh@1320 -- # shift 00:25:53.447 00:34:38 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:53.447 00:34:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:53.447 00:34:38 -- nvmf/common.sh@542 -- # cat 00:25:53.447 00:34:38 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:53.447 00:34:38 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:53.447 00:34:38 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:53.447 00:34:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:53.447 00:34:38 -- target/dif.sh@72 -- # (( file <= files )) 00:25:53.447 00:34:38 -- target/dif.sh@73 -- # cat 00:25:53.447 00:34:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:53.447 00:34:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:53.447 { 00:25:53.447 "params": { 00:25:53.447 "name": "Nvme$subsystem", 00:25:53.447 "trtype": "$TEST_TRANSPORT", 00:25:53.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.447 "adrfam": "ipv4", 00:25:53.447 "trsvcid": "$NVMF_PORT", 00:25:53.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.447 "hdgst": ${hdgst:-false}, 00:25:53.447 "ddgst": ${ddgst:-false} 00:25:53.447 }, 00:25:53.447 "method": "bdev_nvme_attach_controller" 00:25:53.447 } 00:25:53.447 EOF 00:25:53.447 )") 00:25:53.447 00:34:38 -- nvmf/common.sh@542 -- # cat 00:25:53.447 
00:34:38 -- target/dif.sh@72 -- # (( file++ )) 00:25:53.447 00:34:38 -- target/dif.sh@72 -- # (( file <= files )) 00:25:53.447 00:34:38 -- nvmf/common.sh@544 -- # jq . 00:25:53.447 00:34:38 -- nvmf/common.sh@545 -- # IFS=, 00:25:53.447 00:34:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:53.447 "params": { 00:25:53.447 "name": "Nvme0", 00:25:53.447 "trtype": "tcp", 00:25:53.447 "traddr": "10.0.0.2", 00:25:53.447 "adrfam": "ipv4", 00:25:53.447 "trsvcid": "4420", 00:25:53.447 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:53.447 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:53.447 "hdgst": false, 00:25:53.447 "ddgst": false 00:25:53.447 }, 00:25:53.447 "method": "bdev_nvme_attach_controller" 00:25:53.447 },{ 00:25:53.447 "params": { 00:25:53.447 "name": "Nvme1", 00:25:53.447 "trtype": "tcp", 00:25:53.447 "traddr": "10.0.0.2", 00:25:53.447 "adrfam": "ipv4", 00:25:53.447 "trsvcid": "4420", 00:25:53.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:53.447 "hdgst": false, 00:25:53.447 "ddgst": false 00:25:53.447 }, 00:25:53.447 "method": "bdev_nvme_attach_controller" 00:25:53.447 }' 00:25:53.447 00:34:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:53.447 00:34:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:53.447 00:34:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:53.447 00:34:38 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:53.447 00:34:38 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:53.447 00:34:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:53.447 00:34:39 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:53.447 00:34:39 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:53.447 00:34:39 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:53.447 00:34:39 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:53.447 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:53.447 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:53.447 fio-3.35 00:25:53.447 Starting 2 threads 00:25:53.447 [2024-07-13 00:34:39.681714] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
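Editor's note: at this point two subsystems (cnode0 and cnode1) each expose a DIF type-1 null bdev, and the JSON above attaches both controllers so that each of the two fio threads can target its own bdev. Since modprobe nvme-tcp already ran earlier, the same listeners could also be sanity-checked from the initiator side with the kernel NVMe/TCP initiator, reusing the NVME_HOSTNQN/NVME_HOSTID generated at the top of this test. This is purely an optional manual check; the test itself only uses the userspace spdk_bdev path:

  nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
                --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1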
00:25:53.447 [2024-07-13 00:34:39.681799] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:03.420 00:26:03.420 filename0: (groupid=0, jobs=1): err= 0: pid=101724: Sat Jul 13 00:34:49 2024 00:26:03.420 read: IOPS=215, BW=862KiB/s (883kB/s)(8640KiB/10018msec) 00:26:03.420 slat (nsec): min=6396, max=43610, avg=10144.03, stdev=6335.78 00:26:03.420 clat (usec): min=383, max=41529, avg=18518.90, stdev=20118.60 00:26:03.420 lat (usec): min=390, max=41548, avg=18529.05, stdev=20118.44 00:26:03.420 clat percentiles (usec): 00:26:03.420 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 420], 20.00th=[ 433], 00:26:03.420 | 30.00th=[ 445], 40.00th=[ 465], 50.00th=[ 515], 60.00th=[40633], 00:26:03.420 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:03.420 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:26:03.420 | 99.99th=[41681] 00:26:03.420 bw ( KiB/s): min= 576, max= 1184, per=53.49%, avg=862.30, stdev=193.27, samples=20 00:26:03.420 iops : min= 144, max= 296, avg=215.55, stdev=48.32, samples=20 00:26:03.420 lat (usec) : 500=47.92%, 750=6.48%, 1000=0.74% 00:26:03.420 lat (msec) : 2=0.23%, 50=44.63% 00:26:03.420 cpu : usr=97.42%, sys=2.16%, ctx=15, majf=0, minf=0 00:26:03.420 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:03.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.420 issued rwts: total=2160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.420 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:03.420 filename1: (groupid=0, jobs=1): err= 0: pid=101725: Sat Jul 13 00:34:49 2024 00:26:03.420 read: IOPS=187, BW=749KiB/s (767kB/s)(7504KiB/10015msec) 00:26:03.420 slat (nsec): min=6088, max=79550, avg=9827.95, stdev=5788.25 00:26:03.420 clat (usec): min=372, max=41512, avg=21320.97, stdev=20219.32 00:26:03.420 lat (usec): min=379, max=41522, avg=21330.80, stdev=20219.32 00:26:03.420 clat percentiles (usec): 00:26:03.420 | 1.00th=[ 388], 5.00th=[ 400], 10.00th=[ 412], 20.00th=[ 424], 00:26:03.420 | 30.00th=[ 437], 40.00th=[ 461], 50.00th=[40633], 60.00th=[40633], 00:26:03.420 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:03.420 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:26:03.420 | 99.99th=[41681] 00:26:03.420 bw ( KiB/s): min= 384, max= 1312, per=46.42%, avg=748.70, stdev=198.20, samples=20 00:26:03.420 iops : min= 96, max= 328, avg=187.15, stdev=49.55, samples=20 00:26:03.420 lat (usec) : 500=44.40%, 750=1.92%, 1000=2.08% 00:26:03.420 lat (msec) : 50=51.60% 00:26:03.420 cpu : usr=97.53%, sys=2.05%, ctx=7, majf=0, minf=0 00:26:03.420 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:03.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.420 issued rwts: total=1876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.420 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:03.420 00:26:03.420 Run status group 0 (all jobs): 00:26:03.420 READ: bw=1611KiB/s (1650kB/s), 749KiB/s-862KiB/s (767kB/s-883kB/s), io=15.8MiB (16.5MB), run=10015-10018msec 00:26:03.420 00:34:50 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:03.420 00:34:50 -- target/dif.sh@43 -- # local sub 00:26:03.420 00:34:50 -- target/dif.sh@45 -- # for sub in "$@" 
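Editor's note: a quick consistency check on the two-thread summary above. filename0 issued 2160 reads of 4 KiB (2160 x 4 KiB = 8640 KiB, and 8640 KiB / 10.018 s = 862 KiB/s) and filename1 issued 1876 (1876 x 4 KiB = 7504 KiB, 7504 KiB / 10.015 s = 749 KiB/s). The group total is therefore 8640 + 7504 = 16144 KiB, about 15.8 MiB (16.5 MB), and 16144 KiB / 10.02 s = 1611 KiB/s, matching the aggregate reported in the "Run status group 0" line.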
00:26:03.420 00:34:50 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:03.420 00:34:50 -- target/dif.sh@36 -- # local sub_id=0 00:26:03.420 00:34:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:03.420 00:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.420 00:34:50 -- common/autotest_common.sh@10 -- # set +x 00:26:03.420 00:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.420 00:34:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:03.420 00:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.421 00:34:50 -- common/autotest_common.sh@10 -- # set +x 00:26:03.421 00:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.421 00:34:50 -- target/dif.sh@45 -- # for sub in "$@" 00:26:03.421 00:34:50 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:03.421 00:34:50 -- target/dif.sh@36 -- # local sub_id=1 00:26:03.421 00:34:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:03.421 00:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.421 00:34:50 -- common/autotest_common.sh@10 -- # set +x 00:26:03.421 00:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.421 00:34:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:03.421 00:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.421 00:34:50 -- common/autotest_common.sh@10 -- # set +x 00:26:03.421 00:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.421 00:26:03.421 real 0m11.239s 00:26:03.421 user 0m20.358s 00:26:03.421 sys 0m0.725s 00:26:03.421 00:34:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:03.421 ************************************ 00:26:03.421 END TEST fio_dif_1_multi_subsystems 00:26:03.421 ************************************ 00:26:03.421 00:34:50 -- common/autotest_common.sh@10 -- # set +x 00:26:03.421 00:34:50 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:03.421 00:34:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:03.421 00:34:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:03.421 00:34:50 -- common/autotest_common.sh@10 -- # set +x 00:26:03.421 ************************************ 00:26:03.421 START TEST fio_dif_rand_params 00:26:03.421 ************************************ 00:26:03.421 00:34:50 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:26:03.421 00:34:50 -- target/dif.sh@100 -- # local NULL_DIF 00:26:03.421 00:34:50 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:03.421 00:34:50 -- target/dif.sh@103 -- # NULL_DIF=3 00:26:03.421 00:34:50 -- target/dif.sh@103 -- # bs=128k 00:26:03.421 00:34:50 -- target/dif.sh@103 -- # numjobs=3 00:26:03.421 00:34:50 -- target/dif.sh@103 -- # iodepth=3 00:26:03.421 00:34:50 -- target/dif.sh@103 -- # runtime=5 00:26:03.421 00:34:50 -- target/dif.sh@105 -- # create_subsystems 0 00:26:03.421 00:34:50 -- target/dif.sh@28 -- # local sub 00:26:03.421 00:34:50 -- target/dif.sh@30 -- # for sub in "$@" 00:26:03.421 00:34:50 -- target/dif.sh@31 -- # create_subsystem 0 00:26:03.421 00:34:50 -- target/dif.sh@18 -- # local sub_id=0 00:26:03.421 00:34:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:03.421 00:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.421 00:34:50 -- common/autotest_common.sh@10 -- # set +x 00:26:03.421 bdev_null0 00:26:03.421 00:34:50 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.421 00:34:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:03.421 00:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.421 00:34:50 -- common/autotest_common.sh@10 -- # set +x 00:26:03.421 00:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.421 00:34:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:03.421 00:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.421 00:34:50 -- common/autotest_common.sh@10 -- # set +x 00:26:03.421 00:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.421 00:34:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:03.421 00:34:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.421 00:34:50 -- common/autotest_common.sh@10 -- # set +x 00:26:03.421 [2024-07-13 00:34:50.211545] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.421 00:34:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.421 00:34:50 -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:03.421 00:34:50 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:03.421 00:34:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:03.421 00:34:50 -- nvmf/common.sh@520 -- # config=() 00:26:03.421 00:34:50 -- nvmf/common.sh@520 -- # local subsystem config 00:26:03.421 00:34:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.421 00:34:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:03.421 00:34:50 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.421 00:34:50 -- target/dif.sh@82 -- # gen_fio_conf 00:26:03.421 00:34:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:03.421 { 00:26:03.421 "params": { 00:26:03.421 "name": "Nvme$subsystem", 00:26:03.421 "trtype": "$TEST_TRANSPORT", 00:26:03.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:03.421 "adrfam": "ipv4", 00:26:03.421 "trsvcid": "$NVMF_PORT", 00:26:03.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:03.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:03.421 "hdgst": ${hdgst:-false}, 00:26:03.421 "ddgst": ${ddgst:-false} 00:26:03.421 }, 00:26:03.421 "method": "bdev_nvme_attach_controller" 00:26:03.421 } 00:26:03.421 EOF 00:26:03.421 )") 00:26:03.421 00:34:50 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:03.421 00:34:50 -- target/dif.sh@54 -- # local file 00:26:03.421 00:34:50 -- target/dif.sh@56 -- # cat 00:26:03.421 00:34:50 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:03.421 00:34:50 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:03.421 00:34:50 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:03.421 00:34:50 -- common/autotest_common.sh@1320 -- # shift 00:26:03.421 00:34:50 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:03.421 00:34:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:03.421 00:34:50 -- nvmf/common.sh@542 -- # cat 00:26:03.421 00:34:50 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:03.421 00:34:50 
-- common/autotest_common.sh@1324 -- # grep libasan 00:26:03.421 00:34:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:03.421 00:34:50 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:03.421 00:34:50 -- target/dif.sh@72 -- # (( file <= files )) 00:26:03.421 00:34:50 -- nvmf/common.sh@544 -- # jq . 00:26:03.421 00:34:50 -- nvmf/common.sh@545 -- # IFS=, 00:26:03.421 00:34:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:03.421 "params": { 00:26:03.421 "name": "Nvme0", 00:26:03.421 "trtype": "tcp", 00:26:03.421 "traddr": "10.0.0.2", 00:26:03.421 "adrfam": "ipv4", 00:26:03.421 "trsvcid": "4420", 00:26:03.421 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:03.421 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:03.421 "hdgst": false, 00:26:03.421 "ddgst": false 00:26:03.421 }, 00:26:03.421 "method": "bdev_nvme_attach_controller" 00:26:03.421 }' 00:26:03.421 00:34:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:03.421 00:34:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:03.421 00:34:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:03.421 00:34:50 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:03.421 00:34:50 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:03.421 00:34:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:03.421 00:34:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:03.421 00:34:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:03.421 00:34:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:03.421 00:34:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.421 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:03.421 ... 00:26:03.421 fio-3.35 00:26:03.421 Starting 3 threads 00:26:03.680 [2024-07-13 00:34:50.888658] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
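Editor's note: the three-thread run that follows exercises the 128 KiB randread, iodepth=3, 5-second shape selected by fio_dif_rand_params against the DIF type-3 null bdev created above. Reconstructed as a conventional fio job file it would look roughly like the sketch below; this is an approximation of what gen_fio_conf feeds in on /dev/fd/61 (the filename and time_based setting are assumptions), not the literal generated file:

  cat > /tmp/dif_rand_params.fio <<'EOF'
  [filename0]
  ioengine=spdk_bdev
  thread=1
  filename=Nvme0n1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5
  time_based
  EOF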
00:26:03.680 [2024-07-13 00:34:50.888731] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:08.968 00:26:08.968 filename0: (groupid=0, jobs=1): err= 0: pid=101881: Sat Jul 13 00:34:56 2024 00:26:08.968 read: IOPS=188, BW=23.6MiB/s (24.8MB/s)(119MiB/5016msec) 00:26:08.968 slat (nsec): min=6390, max=57759, avg=14377.18, stdev=7296.99 00:26:08.968 clat (usec): min=5430, max=52076, avg=15844.70, stdev=15422.06 00:26:08.968 lat (usec): min=5450, max=52084, avg=15859.08, stdev=15421.92 00:26:08.968 clat percentiles (usec): 00:26:08.968 | 1.00th=[ 5604], 5.00th=[ 6521], 10.00th=[ 6980], 20.00th=[ 7701], 00:26:08.968 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:26:08.968 | 70.00th=[10028], 80.00th=[10683], 90.00th=[49546], 95.00th=[50070], 00:26:08.968 | 99.00th=[51119], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:26:08.968 | 99.99th=[52167] 00:26:08.968 bw ( KiB/s): min=19200, max=31744, per=23.20%, avg=24192.00, stdev=4002.94, samples=10 00:26:08.968 iops : min= 150, max= 248, avg=189.00, stdev=31.27, samples=10 00:26:08.968 lat (msec) : 10=69.41%, 20=13.19%, 50=10.86%, 100=6.54% 00:26:08.968 cpu : usr=96.23%, sys=2.77%, ctx=6, majf=0, minf=0 00:26:08.968 IO depths : 1=9.9%, 2=90.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:08.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.968 issued rwts: total=948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.968 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:08.968 filename0: (groupid=0, jobs=1): err= 0: pid=101882: Sat Jul 13 00:34:56 2024 00:26:08.968 read: IOPS=351, BW=43.9MiB/s (46.1MB/s)(220MiB/5001msec) 00:26:08.968 slat (nsec): min=6601, max=60776, avg=9876.50, stdev=5061.36 00:26:08.968 clat (usec): min=3690, max=52226, avg=8509.58, stdev=3557.69 00:26:08.968 lat (usec): min=3696, max=52234, avg=8519.46, stdev=3558.23 00:26:08.968 clat percentiles (usec): 00:26:08.968 | 1.00th=[ 3720], 5.00th=[ 3752], 10.00th=[ 3818], 20.00th=[ 3949], 00:26:08.968 | 30.00th=[ 7439], 40.00th=[ 7898], 50.00th=[ 8291], 60.00th=[ 8848], 00:26:08.968 | 70.00th=[10552], 80.00th=[11863], 90.00th=[12518], 95.00th=[12911], 00:26:08.968 | 99.00th=[13698], 99.50th=[14353], 99.90th=[52167], 99.95th=[52167], 00:26:08.968 | 99.99th=[52167] 00:26:08.968 bw ( KiB/s): min=39168, max=50688, per=42.72%, avg=44544.00, stdev=3498.41, samples=9 00:26:08.968 iops : min= 306, max= 396, avg=348.00, stdev=27.33, samples=9 00:26:08.968 lat (msec) : 4=20.42%, 10=47.33%, 20=32.08%, 100=0.17% 00:26:08.968 cpu : usr=93.18%, sys=4.98%, ctx=12, majf=0, minf=0 00:26:08.968 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:08.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.968 issued rwts: total=1758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.968 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:08.968 filename0: (groupid=0, jobs=1): err= 0: pid=101884: Sat Jul 13 00:34:56 2024 00:26:08.968 read: IOPS=275, BW=34.5MiB/s (36.2MB/s)(173MiB/5002msec) 00:26:08.968 slat (nsec): min=6719, max=64395, avg=12758.06, stdev=6070.10 00:26:08.968 clat (usec): min=3386, max=53142, avg=10854.96, stdev=9496.24 00:26:08.968 lat (usec): min=3410, max=53153, avg=10867.72, stdev=9496.42 00:26:08.968 clat percentiles (usec): 
00:26:08.968 | 1.00th=[ 3982], 5.00th=[ 5735], 10.00th=[ 6128], 20.00th=[ 6652], 00:26:08.968 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[ 8225], 60.00th=[10028], 00:26:08.968 | 70.00th=[10814], 80.00th=[11338], 90.00th=[12256], 95.00th=[46924], 00:26:08.968 | 99.00th=[51119], 99.50th=[51643], 99.90th=[53216], 99.95th=[53216], 00:26:08.968 | 99.99th=[53216] 00:26:08.968 bw ( KiB/s): min=27904, max=45824, per=33.80%, avg=35242.67, stdev=5524.80, samples=9 00:26:08.968 iops : min= 218, max= 358, avg=275.33, stdev=43.16, samples=9 00:26:08.968 lat (msec) : 4=1.09%, 10=58.04%, 20=35.43%, 50=3.62%, 100=1.81% 00:26:08.968 cpu : usr=94.14%, sys=4.22%, ctx=9, majf=0, minf=0 00:26:08.968 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:08.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:08.968 issued rwts: total=1380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:08.968 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:08.968 00:26:08.968 Run status group 0 (all jobs): 00:26:08.968 READ: bw=102MiB/s (107MB/s), 23.6MiB/s-43.9MiB/s (24.8MB/s-46.1MB/s), io=511MiB (536MB), run=5001-5016msec 00:26:09.226 00:34:56 -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:09.226 00:34:56 -- target/dif.sh@43 -- # local sub 00:26:09.226 00:34:56 -- target/dif.sh@45 -- # for sub in "$@" 00:26:09.226 00:34:56 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:09.226 00:34:56 -- target/dif.sh@36 -- # local sub_id=0 00:26:09.226 00:34:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:09.226 00:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.226 00:34:56 -- common/autotest_common.sh@10 -- # set +x 00:26:09.226 00:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.226 00:34:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:09.226 00:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.226 00:34:56 -- common/autotest_common.sh@10 -- # set +x 00:26:09.226 00:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.226 00:34:56 -- target/dif.sh@109 -- # NULL_DIF=2 00:26:09.226 00:34:56 -- target/dif.sh@109 -- # bs=4k 00:26:09.226 00:34:56 -- target/dif.sh@109 -- # numjobs=8 00:26:09.226 00:34:56 -- target/dif.sh@109 -- # iodepth=16 00:26:09.226 00:34:56 -- target/dif.sh@109 -- # runtime= 00:26:09.226 00:34:56 -- target/dif.sh@109 -- # files=2 00:26:09.226 00:34:56 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:09.226 00:34:56 -- target/dif.sh@28 -- # local sub 00:26:09.226 00:34:56 -- target/dif.sh@30 -- # for sub in "$@" 00:26:09.226 00:34:56 -- target/dif.sh@31 -- # create_subsystem 0 00:26:09.226 00:34:56 -- target/dif.sh@18 -- # local sub_id=0 00:26:09.226 00:34:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:09.226 00:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.226 00:34:56 -- common/autotest_common.sh@10 -- # set +x 00:26:09.226 bdev_null0 00:26:09.226 00:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.226 00:34:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:09.226 00:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.226 00:34:56 -- common/autotest_common.sh@10 -- # set +x 00:26:09.226 00:34:56 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:26:09.226 00:34:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:09.226 00:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.226 00:34:56 -- common/autotest_common.sh@10 -- # set +x 00:26:09.226 00:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.226 00:34:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:09.226 00:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.226 00:34:56 -- common/autotest_common.sh@10 -- # set +x 00:26:09.227 [2024-07-13 00:34:56.286847] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.227 00:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.227 00:34:56 -- target/dif.sh@30 -- # for sub in "$@" 00:26:09.227 00:34:56 -- target/dif.sh@31 -- # create_subsystem 1 00:26:09.227 00:34:56 -- target/dif.sh@18 -- # local sub_id=1 00:26:09.227 00:34:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:09.227 00:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.227 00:34:56 -- common/autotest_common.sh@10 -- # set +x 00:26:09.227 bdev_null1 00:26:09.227 00:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.227 00:34:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:09.227 00:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.227 00:34:56 -- common/autotest_common.sh@10 -- # set +x 00:26:09.227 00:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.227 00:34:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:09.227 00:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.227 00:34:56 -- common/autotest_common.sh@10 -- # set +x 00:26:09.227 00:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.227 00:34:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:09.227 00:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.227 00:34:56 -- common/autotest_common.sh@10 -- # set +x 00:26:09.227 00:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.227 00:34:56 -- target/dif.sh@30 -- # for sub in "$@" 00:26:09.227 00:34:56 -- target/dif.sh@31 -- # create_subsystem 2 00:26:09.227 00:34:56 -- target/dif.sh@18 -- # local sub_id=2 00:26:09.227 00:34:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:09.227 00:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.227 00:34:56 -- common/autotest_common.sh@10 -- # set +x 00:26:09.227 bdev_null2 00:26:09.227 00:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.227 00:34:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:09.227 00:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.227 00:34:56 -- common/autotest_common.sh@10 -- # set +x 00:26:09.227 00:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.227 00:34:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:09.227 00:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.227 00:34:56 -- 
common/autotest_common.sh@10 -- # set +x 00:26:09.227 00:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.227 00:34:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:09.227 00:34:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.227 00:34:56 -- common/autotest_common.sh@10 -- # set +x 00:26:09.227 00:34:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.227 00:34:56 -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:09.227 00:34:56 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:09.227 00:34:56 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:09.227 00:34:56 -- nvmf/common.sh@520 -- # config=() 00:26:09.227 00:34:56 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:09.227 00:34:56 -- nvmf/common.sh@520 -- # local subsystem config 00:26:09.227 00:34:56 -- target/dif.sh@82 -- # gen_fio_conf 00:26:09.227 00:34:56 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:09.227 00:34:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:09.227 00:34:56 -- target/dif.sh@54 -- # local file 00:26:09.227 00:34:56 -- target/dif.sh@56 -- # cat 00:26:09.227 00:34:56 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:09.227 00:34:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:09.227 { 00:26:09.227 "params": { 00:26:09.227 "name": "Nvme$subsystem", 00:26:09.227 "trtype": "$TEST_TRANSPORT", 00:26:09.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:09.227 "adrfam": "ipv4", 00:26:09.227 "trsvcid": "$NVMF_PORT", 00:26:09.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:09.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:09.227 "hdgst": ${hdgst:-false}, 00:26:09.227 "ddgst": ${ddgst:-false} 00:26:09.227 }, 00:26:09.227 "method": "bdev_nvme_attach_controller" 00:26:09.227 } 00:26:09.227 EOF 00:26:09.227 )") 00:26:09.227 00:34:56 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:09.227 00:34:56 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:09.227 00:34:56 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:09.227 00:34:56 -- common/autotest_common.sh@1320 -- # shift 00:26:09.227 00:34:56 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:09.227 00:34:56 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:09.227 00:34:56 -- nvmf/common.sh@542 -- # cat 00:26:09.227 00:34:56 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:09.227 00:34:56 -- target/dif.sh@72 -- # (( file <= files )) 00:26:09.227 00:34:56 -- target/dif.sh@73 -- # cat 00:26:09.227 00:34:56 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:09.227 00:34:56 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:09.227 00:34:56 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:09.227 00:34:56 -- target/dif.sh@72 -- # (( file++ )) 00:26:09.227 00:34:56 -- target/dif.sh@72 -- # (( file <= files )) 00:26:09.227 00:34:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:09.227 00:34:56 -- target/dif.sh@73 -- # cat 00:26:09.227 00:34:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:09.227 { 00:26:09.227 "params": { 00:26:09.227 "name": "Nvme$subsystem", 00:26:09.227 "trtype": 
"$TEST_TRANSPORT", 00:26:09.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:09.227 "adrfam": "ipv4", 00:26:09.227 "trsvcid": "$NVMF_PORT", 00:26:09.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:09.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:09.227 "hdgst": ${hdgst:-false}, 00:26:09.227 "ddgst": ${ddgst:-false} 00:26:09.227 }, 00:26:09.227 "method": "bdev_nvme_attach_controller" 00:26:09.227 } 00:26:09.227 EOF 00:26:09.227 )") 00:26:09.227 00:34:56 -- nvmf/common.sh@542 -- # cat 00:26:09.227 00:34:56 -- target/dif.sh@72 -- # (( file++ )) 00:26:09.227 00:34:56 -- target/dif.sh@72 -- # (( file <= files )) 00:26:09.227 00:34:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:09.227 00:34:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:09.227 { 00:26:09.227 "params": { 00:26:09.227 "name": "Nvme$subsystem", 00:26:09.227 "trtype": "$TEST_TRANSPORT", 00:26:09.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:09.227 "adrfam": "ipv4", 00:26:09.227 "trsvcid": "$NVMF_PORT", 00:26:09.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:09.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:09.227 "hdgst": ${hdgst:-false}, 00:26:09.227 "ddgst": ${ddgst:-false} 00:26:09.227 }, 00:26:09.227 "method": "bdev_nvme_attach_controller" 00:26:09.227 } 00:26:09.227 EOF 00:26:09.227 )") 00:26:09.227 00:34:56 -- nvmf/common.sh@542 -- # cat 00:26:09.227 00:34:56 -- nvmf/common.sh@544 -- # jq . 00:26:09.227 00:34:56 -- nvmf/common.sh@545 -- # IFS=, 00:26:09.227 00:34:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:09.227 "params": { 00:26:09.227 "name": "Nvme0", 00:26:09.227 "trtype": "tcp", 00:26:09.227 "traddr": "10.0.0.2", 00:26:09.227 "adrfam": "ipv4", 00:26:09.227 "trsvcid": "4420", 00:26:09.227 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:09.227 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:09.227 "hdgst": false, 00:26:09.227 "ddgst": false 00:26:09.227 }, 00:26:09.227 "method": "bdev_nvme_attach_controller" 00:26:09.227 },{ 00:26:09.227 "params": { 00:26:09.227 "name": "Nvme1", 00:26:09.227 "trtype": "tcp", 00:26:09.227 "traddr": "10.0.0.2", 00:26:09.227 "adrfam": "ipv4", 00:26:09.227 "trsvcid": "4420", 00:26:09.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:09.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:09.227 "hdgst": false, 00:26:09.227 "ddgst": false 00:26:09.227 }, 00:26:09.227 "method": "bdev_nvme_attach_controller" 00:26:09.227 },{ 00:26:09.227 "params": { 00:26:09.227 "name": "Nvme2", 00:26:09.227 "trtype": "tcp", 00:26:09.227 "traddr": "10.0.0.2", 00:26:09.227 "adrfam": "ipv4", 00:26:09.227 "trsvcid": "4420", 00:26:09.227 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:09.227 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:09.227 "hdgst": false, 00:26:09.227 "ddgst": false 00:26:09.227 }, 00:26:09.227 "method": "bdev_nvme_attach_controller" 00:26:09.227 }' 00:26:09.227 00:34:56 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:09.227 00:34:56 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:09.227 00:34:56 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:09.227 00:34:56 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:09.227 00:34:56 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:09.227 00:34:56 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:09.227 00:34:56 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:09.227 00:34:56 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:09.227 00:34:56 -- 
common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:09.227 00:34:56 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:09.485 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:09.485 ... 00:26:09.485 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:09.485 ... 00:26:09.485 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:09.485 ... 00:26:09.485 fio-3.35 00:26:09.485 Starting 24 threads 00:26:10.050 [2024-07-13 00:34:57.220885] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:26:10.050 [2024-07-13 00:34:57.220966] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:22.259 00:26:22.259 filename0: (groupid=0, jobs=1): err= 0: pid=101985: Sat Jul 13 00:35:07 2024 00:26:22.259 read: IOPS=216, BW=865KiB/s (886kB/s)(8660KiB/10006msec) 00:26:22.259 slat (usec): min=4, max=8024, avg=20.66, stdev=210.97 00:26:22.259 clat (msec): min=3, max=140, avg=73.81, stdev=22.25 00:26:22.259 lat (msec): min=3, max=140, avg=73.84, stdev=22.25 00:26:22.259 clat percentiles (msec): 00:26:22.259 | 1.00th=[ 8], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 60], 00:26:22.259 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 79], 00:26:22.259 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 105], 95.00th=[ 115], 00:26:22.259 | 99.00th=[ 129], 99.50th=[ 136], 99.90th=[ 140], 99.95th=[ 140], 00:26:22.259 | 99.99th=[ 140] 00:26:22.259 bw ( KiB/s): min= 648, max= 1072, per=3.86%, avg=849.74, stdev=106.93, samples=19 00:26:22.259 iops : min= 162, max= 268, avg=212.42, stdev=26.73, samples=19 00:26:22.259 lat (msec) : 4=0.28%, 10=1.02%, 20=0.46%, 50=9.61%, 100=75.94% 00:26:22.259 lat (msec) : 250=12.70% 00:26:22.259 cpu : usr=39.26%, sys=0.61%, ctx=1070, majf=0, minf=9 00:26:22.259 IO depths : 1=1.6%, 2=3.7%, 4=12.1%, 8=70.8%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:22.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.259 complete : 0=0.0%, 4=90.8%, 8=4.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.259 issued rwts: total=2165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.259 filename0: (groupid=0, jobs=1): err= 0: pid=101986: Sat Jul 13 00:35:07 2024 00:26:22.259 read: IOPS=207, BW=829KiB/s (849kB/s)(8288KiB/10001msec) 00:26:22.259 slat (usec): min=4, max=8030, avg=20.12, stdev=204.00 00:26:22.259 clat (msec): min=5, max=158, avg=77.09, stdev=24.57 00:26:22.259 lat (msec): min=5, max=158, avg=77.11, stdev=24.57 00:26:22.259 clat percentiles (msec): 00:26:22.259 | 1.00th=[ 6], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 59], 00:26:22.259 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 81], 00:26:22.259 | 70.00th=[ 89], 80.00th=[ 99], 90.00th=[ 113], 95.00th=[ 121], 00:26:22.259 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 159], 99.95th=[ 159], 00:26:22.259 | 99.99th=[ 159] 00:26:22.259 bw ( KiB/s): min= 512, max= 1024, per=3.67%, avg=808.84, stdev=134.68, samples=19 00:26:22.259 iops : min= 128, max= 256, avg=202.21, stdev=33.67, samples=19 00:26:22.260 lat (msec) : 10=1.54%, 50=6.03%, 100=73.07%, 250=19.35% 00:26:22.260 cpu : usr=44.49%, sys=0.71%, ctx=1324, majf=0, minf=9 00:26:22.260 
IO depths : 1=3.0%, 2=6.6%, 4=16.3%, 8=63.6%, 16=10.6%, 32=0.0%, >=64=0.0% 00:26:22.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.260 complete : 0=0.0%, 4=92.1%, 8=3.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.260 issued rwts: total=2072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.260 filename0: (groupid=0, jobs=1): err= 0: pid=101987: Sat Jul 13 00:35:07 2024 00:26:22.260 read: IOPS=211, BW=848KiB/s (868kB/s)(8480KiB/10003msec) 00:26:22.260 slat (usec): min=7, max=8029, avg=17.56, stdev=174.28 00:26:22.260 clat (msec): min=6, max=162, avg=75.36, stdev=24.42 00:26:22.260 lat (msec): min=6, max=162, avg=75.38, stdev=24.42 00:26:22.260 clat percentiles (msec): 00:26:22.260 | 1.00th=[ 7], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 60], 00:26:22.260 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 83], 00:26:22.260 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 118], 00:26:22.260 | 99.00th=[ 134], 99.50th=[ 136], 99.90th=[ 163], 99.95th=[ 163], 00:26:22.260 | 99.99th=[ 163] 00:26:22.260 bw ( KiB/s): min= 552, max= 1080, per=3.75%, avg=825.05, stdev=137.58, samples=19 00:26:22.260 iops : min= 138, max= 270, avg=206.26, stdev=34.40, samples=19 00:26:22.260 lat (msec) : 10=1.51%, 50=12.55%, 100=71.98%, 250=13.96% 00:26:22.260 cpu : usr=34.85%, sys=0.56%, ctx=907, majf=0, minf=9 00:26:22.260 IO depths : 1=1.7%, 2=4.2%, 4=13.9%, 8=68.9%, 16=11.3%, 32=0.0%, >=64=0.0% 00:26:22.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.260 complete : 0=0.0%, 4=91.0%, 8=3.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.260 issued rwts: total=2120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.260 filename0: (groupid=0, jobs=1): err= 0: pid=101988: Sat Jul 13 00:35:07 2024 00:26:22.260 read: IOPS=254, BW=1018KiB/s (1043kB/s)(9.97MiB/10028msec) 00:26:22.260 slat (usec): min=4, max=8040, avg=30.40, stdev=363.51 00:26:22.260 clat (msec): min=6, max=147, avg=62.56, stdev=22.29 00:26:22.260 lat (msec): min=6, max=147, avg=62.59, stdev=22.30 00:26:22.260 clat percentiles (msec): 00:26:22.260 | 1.00th=[ 19], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 45], 00:26:22.260 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 67], 00:26:22.260 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 93], 95.00th=[ 102], 00:26:22.260 | 99.00th=[ 125], 99.50th=[ 140], 99.90th=[ 148], 99.95th=[ 148], 00:26:22.260 | 99.99th=[ 148] 00:26:22.260 bw ( KiB/s): min= 560, max= 1526, per=4.62%, avg=1017.95, stdev=230.33, samples=20 00:26:22.260 iops : min= 140, max= 381, avg=254.45, stdev=57.54, samples=20 00:26:22.260 lat (msec) : 10=0.63%, 20=1.25%, 50=33.29%, 100=59.34%, 250=5.48% 00:26:22.260 cpu : usr=39.28%, sys=0.85%, ctx=1006, majf=0, minf=9 00:26:22.260 IO depths : 1=1.6%, 2=3.2%, 4=10.1%, 8=73.1%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:22.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.260 complete : 0=0.0%, 4=90.0%, 8=5.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.260 issued rwts: total=2553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.260 filename0: (groupid=0, jobs=1): err= 0: pid=101989: Sat Jul 13 00:35:07 2024 00:26:22.260 read: IOPS=233, BW=934KiB/s (956kB/s)(9376KiB/10039msec) 00:26:22.260 slat (usec): min=4, max=9021, avg=23.56, stdev=298.99 00:26:22.260 clat (msec): min=11, max=151, 
avg=68.31, stdev=23.69 00:26:22.260 lat (msec): min=11, max=151, avg=68.33, stdev=23.70 00:26:22.260 clat percentiles (msec): 00:26:22.260 | 1.00th=[ 16], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:26:22.260 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 71], 00:26:22.260 | 70.00th=[ 81], 80.00th=[ 91], 90.00th=[ 101], 95.00th=[ 110], 00:26:22.260 | 99.00th=[ 126], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 153], 00:26:22.260 | 99.99th=[ 153] 00:26:22.260 bw ( KiB/s): min= 600, max= 1480, per=4.22%, avg=930.90, stdev=195.95, samples=20 00:26:22.260 iops : min= 150, max= 370, avg=232.70, stdev=49.02, samples=20 00:26:22.260 lat (msec) : 20=1.37%, 50=22.95%, 100=66.13%, 250=9.56% 00:26:22.260 cpu : usr=34.27%, sys=0.65%, ctx=942, majf=0, minf=9 00:26:22.260 IO depths : 1=2.0%, 2=4.2%, 4=11.9%, 8=70.2%, 16=11.7%, 32=0.0%, >=64=0.0% 00:26:22.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.260 complete : 0=0.0%, 4=90.6%, 8=4.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.260 issued rwts: total=2344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.260 filename0: (groupid=0, jobs=1): err= 0: pid=101990: Sat Jul 13 00:35:07 2024 00:26:22.260 read: IOPS=214, BW=858KiB/s (879kB/s)(8584KiB/10003msec) 00:26:22.260 slat (usec): min=5, max=7030, avg=18.98, stdev=174.79 00:26:22.260 clat (msec): min=24, max=158, avg=74.45, stdev=23.29 00:26:22.260 lat (msec): min=24, max=158, avg=74.46, stdev=23.30 00:26:22.260 clat percentiles (msec): 00:26:22.260 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 56], 00:26:22.260 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 77], 00:26:22.260 | 70.00th=[ 86], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 118], 00:26:22.260 | 99.00th=[ 133], 99.50th=[ 140], 99.90th=[ 159], 99.95th=[ 159], 00:26:22.260 | 99.99th=[ 159] 00:26:22.260 bw ( KiB/s): min= 512, max= 1256, per=3.90%, avg=858.11, stdev=173.96, samples=19 00:26:22.260 iops : min= 128, max= 314, avg=214.53, stdev=43.49, samples=19 00:26:22.260 lat (msec) : 50=14.21%, 100=72.27%, 250=13.51% 00:26:22.260 cpu : usr=42.49%, sys=0.71%, ctx=1456, majf=0, minf=9 00:26:22.260 IO depths : 1=2.3%, 2=5.1%, 4=14.5%, 8=67.1%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:22.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.260 complete : 0=0.0%, 4=91.1%, 8=4.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.260 issued rwts: total=2146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.260 filename0: (groupid=0, jobs=1): err= 0: pid=101991: Sat Jul 13 00:35:07 2024 00:26:22.260 read: IOPS=221, BW=886KiB/s (907kB/s)(8880KiB/10020msec) 00:26:22.260 slat (usec): min=4, max=12021, avg=29.07, stdev=381.19 00:26:22.260 clat (msec): min=23, max=155, avg=71.98, stdev=21.93 00:26:22.260 lat (msec): min=23, max=155, avg=72.01, stdev=21.93 00:26:22.260 clat percentiles (msec): 00:26:22.260 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 56], 00:26:22.260 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 73], 00:26:22.260 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 103], 95.00th=[ 109], 00:26:22.260 | 99.00th=[ 131], 99.50th=[ 142], 99.90th=[ 157], 99.95th=[ 157], 00:26:22.260 | 99.99th=[ 157] 00:26:22.260 bw ( KiB/s): min= 640, max= 1112, per=4.01%, avg=883.25, stdev=131.52, samples=20 00:26:22.260 iops : min= 160, max= 278, avg=220.80, stdev=32.87, samples=20 00:26:22.260 lat (msec) : 50=16.85%, 100=72.25%, 
250=10.90% 00:26:22.260 cpu : usr=35.46%, sys=0.59%, ctx=965, majf=0, minf=9 00:26:22.260 IO depths : 1=1.6%, 2=3.8%, 4=11.0%, 8=71.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:22.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.260 complete : 0=0.0%, 4=90.6%, 8=4.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.260 issued rwts: total=2220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.260 filename0: (groupid=0, jobs=1): err= 0: pid=101992: Sat Jul 13 00:35:07 2024 00:26:22.260 read: IOPS=215, BW=861KiB/s (882kB/s)(8624KiB/10015msec) 00:26:22.260 slat (usec): min=7, max=8024, avg=23.35, stdev=287.30 00:26:22.260 clat (msec): min=24, max=164, avg=74.13, stdev=23.04 00:26:22.260 lat (msec): min=24, max=164, avg=74.15, stdev=23.04 00:26:22.260 clat percentiles (msec): 00:26:22.260 | 1.00th=[ 29], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 58], 00:26:22.260 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 80], 00:26:22.260 | 70.00th=[ 85], 80.00th=[ 93], 90.00th=[ 106], 95.00th=[ 117], 00:26:22.260 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 165], 99.95th=[ 165], 00:26:22.260 | 99.99th=[ 165] 00:26:22.260 bw ( KiB/s): min= 640, max= 1152, per=3.90%, avg=858.16, stdev=151.54, samples=19 00:26:22.260 iops : min= 160, max= 288, avg=214.53, stdev=37.89, samples=19 00:26:22.261 lat (msec) : 50=14.29%, 100=75.19%, 250=10.53% 00:26:22.261 cpu : usr=33.07%, sys=0.65%, ctx=902, majf=0, minf=9 00:26:22.261 IO depths : 1=0.9%, 2=2.5%, 4=10.0%, 8=73.9%, 16=12.7%, 32=0.0%, >=64=0.0% 00:26:22.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.261 complete : 0=0.0%, 4=90.3%, 8=5.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.261 issued rwts: total=2156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.261 filename1: (groupid=0, jobs=1): err= 0: pid=101993: Sat Jul 13 00:35:07 2024 00:26:22.261 read: IOPS=214, BW=858KiB/s (879kB/s)(8580KiB/10001msec) 00:26:22.261 slat (usec): min=7, max=8031, avg=20.85, stdev=244.51 00:26:22.261 clat (msec): min=6, max=187, avg=74.46, stdev=25.84 00:26:22.261 lat (msec): min=6, max=187, avg=74.48, stdev=25.84 00:26:22.261 clat percentiles (msec): 00:26:22.261 | 1.00th=[ 8], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 56], 00:26:22.261 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 82], 00:26:22.261 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 117], 00:26:22.261 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 188], 99.95th=[ 188], 00:26:22.261 | 99.99th=[ 188] 00:26:22.261 bw ( KiB/s): min= 512, max= 1024, per=3.76%, avg=829.05, stdev=137.20, samples=19 00:26:22.261 iops : min= 128, max= 256, avg=207.26, stdev=34.30, samples=19 00:26:22.261 lat (msec) : 10=1.49%, 50=16.83%, 100=65.97%, 250=15.71% 00:26:22.261 cpu : usr=33.35%, sys=0.53%, ctx=905, majf=0, minf=9 00:26:22.261 IO depths : 1=1.5%, 2=3.6%, 4=12.4%, 8=71.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:26:22.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.261 complete : 0=0.0%, 4=90.6%, 8=4.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.261 issued rwts: total=2145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.261 filename1: (groupid=0, jobs=1): err= 0: pid=101994: Sat Jul 13 00:35:07 2024 00:26:22.261 read: IOPS=229, BW=917KiB/s (939kB/s)(9192KiB/10025msec) 00:26:22.261 slat (usec): min=6, max=8030, 
avg=21.04, stdev=250.86 00:26:22.261 clat (msec): min=24, max=155, avg=69.58, stdev=21.19 00:26:22.261 lat (msec): min=24, max=155, avg=69.61, stdev=21.19 00:26:22.261 clat percentiles (msec): 00:26:22.261 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 52], 00:26:22.261 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 72], 00:26:22.261 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 109], 00:26:22.261 | 99.00th=[ 131], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:26:22.261 | 99.99th=[ 157] 00:26:22.261 bw ( KiB/s): min= 688, max= 1168, per=4.16%, avg=917.20, stdev=122.71, samples=20 00:26:22.261 iops : min= 172, max= 292, avg=229.30, stdev=30.68, samples=20 00:26:22.261 lat (msec) : 50=17.54%, 100=73.37%, 250=9.09% 00:26:22.261 cpu : usr=40.45%, sys=0.66%, ctx=1093, majf=0, minf=9 00:26:22.261 IO depths : 1=1.3%, 2=3.0%, 4=9.7%, 8=73.5%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:22.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.261 complete : 0=0.0%, 4=90.3%, 8=5.4%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.261 issued rwts: total=2298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.261 filename1: (groupid=0, jobs=1): err= 0: pid=101995: Sat Jul 13 00:35:07 2024 00:26:22.261 read: IOPS=247, BW=989KiB/s (1013kB/s)(9912KiB/10023msec) 00:26:22.261 slat (usec): min=3, max=10087, avg=16.73, stdev=202.54 00:26:22.261 clat (msec): min=17, max=131, avg=64.53, stdev=18.81 00:26:22.261 lat (msec): min=17, max=131, avg=64.55, stdev=18.81 00:26:22.261 clat percentiles (msec): 00:26:22.261 | 1.00th=[ 31], 5.00th=[ 38], 10.00th=[ 42], 20.00th=[ 47], 00:26:22.261 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 67], 00:26:22.261 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 92], 95.00th=[ 101], 00:26:22.261 | 99.00th=[ 112], 99.50th=[ 115], 99.90th=[ 132], 99.95th=[ 132], 00:26:22.261 | 99.99th=[ 132] 00:26:22.261 bw ( KiB/s): min= 640, max= 1296, per=4.49%, avg=988.55, stdev=152.19, samples=20 00:26:22.261 iops : min= 160, max= 324, avg=247.10, stdev=38.03, samples=20 00:26:22.261 lat (msec) : 20=0.65%, 50=25.30%, 100=69.17%, 250=4.88% 00:26:22.261 cpu : usr=44.94%, sys=0.75%, ctx=1578, majf=0, minf=9 00:26:22.261 IO depths : 1=1.8%, 2=3.9%, 4=11.5%, 8=71.3%, 16=11.5%, 32=0.0%, >=64=0.0% 00:26:22.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.261 complete : 0=0.0%, 4=90.5%, 8=4.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.261 issued rwts: total=2478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.261 filename1: (groupid=0, jobs=1): err= 0: pid=101996: Sat Jul 13 00:35:07 2024 00:26:22.261 read: IOPS=214, BW=859KiB/s (879kB/s)(8604KiB/10018msec) 00:26:22.261 slat (usec): min=5, max=8025, avg=24.15, stdev=299.00 00:26:22.261 clat (msec): min=27, max=163, avg=74.28, stdev=22.25 00:26:22.261 lat (msec): min=27, max=163, avg=74.31, stdev=22.25 00:26:22.261 clat percentiles (msec): 00:26:22.261 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 56], 00:26:22.261 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 81], 00:26:22.261 | 70.00th=[ 86], 80.00th=[ 95], 90.00th=[ 102], 95.00th=[ 117], 00:26:22.261 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 165], 99.95th=[ 165], 00:26:22.261 | 99.99th=[ 165] 00:26:22.261 bw ( KiB/s): min= 608, max= 1120, per=3.90%, avg=858.53, stdev=137.48, samples=19 00:26:22.261 iops : min= 152, max= 280, avg=214.63, 
stdev=34.37, samples=19 00:26:22.261 lat (msec) : 50=14.69%, 100=74.01%, 250=11.30% 00:26:22.261 cpu : usr=32.38%, sys=0.55%, ctx=895, majf=0, minf=9 00:26:22.261 IO depths : 1=1.0%, 2=2.6%, 4=10.9%, 8=72.8%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:22.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.261 complete : 0=0.0%, 4=89.9%, 8=5.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.261 issued rwts: total=2151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.261 filename1: (groupid=0, jobs=1): err= 0: pid=101997: Sat Jul 13 00:35:07 2024 00:26:22.261 read: IOPS=206, BW=824KiB/s (844kB/s)(8256KiB/10014msec) 00:26:22.261 slat (usec): min=3, max=4034, avg=20.29, stdev=157.17 00:26:22.261 clat (msec): min=26, max=173, avg=77.49, stdev=22.26 00:26:22.261 lat (msec): min=26, max=173, avg=77.51, stdev=22.26 00:26:22.261 clat percentiles (msec): 00:26:22.261 | 1.00th=[ 32], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 62], 00:26:22.261 | 30.00th=[ 65], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 79], 00:26:22.261 | 70.00th=[ 90], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 123], 00:26:22.261 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 174], 99.95th=[ 174], 00:26:22.261 | 99.99th=[ 174] 00:26:22.261 bw ( KiB/s): min= 552, max= 1024, per=3.70%, avg=815.16, stdev=130.50, samples=19 00:26:22.261 iops : min= 138, max= 256, avg=203.79, stdev=32.63, samples=19 00:26:22.261 lat (msec) : 50=6.15%, 100=80.86%, 250=12.98% 00:26:22.261 cpu : usr=49.50%, sys=0.83%, ctx=1256, majf=0, minf=9 00:26:22.261 IO depths : 1=3.3%, 2=7.5%, 4=19.3%, 8=60.3%, 16=9.6%, 32=0.0%, >=64=0.0% 00:26:22.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.261 complete : 0=0.0%, 4=92.3%, 8=2.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.261 issued rwts: total=2064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.261 filename1: (groupid=0, jobs=1): err= 0: pid=101998: Sat Jul 13 00:35:07 2024 00:26:22.261 read: IOPS=220, BW=881KiB/s (902kB/s)(8824KiB/10016msec) 00:26:22.261 slat (usec): min=4, max=8036, avg=23.88, stdev=264.00 00:26:22.261 clat (msec): min=19, max=177, avg=72.45, stdev=23.50 00:26:22.261 lat (msec): min=19, max=177, avg=72.47, stdev=23.51 00:26:22.261 clat percentiles (msec): 00:26:22.262 | 1.00th=[ 35], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 57], 00:26:22.262 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 71], 00:26:22.262 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 107], 95.00th=[ 121], 00:26:22.262 | 99.00th=[ 150], 99.50th=[ 150], 99.90th=[ 178], 99.95th=[ 178], 00:26:22.262 | 99.99th=[ 178] 00:26:22.262 bw ( KiB/s): min= 512, max= 1200, per=3.97%, avg=874.95, stdev=170.88, samples=19 00:26:22.262 iops : min= 128, max= 300, avg=218.74, stdev=42.72, samples=19 00:26:22.262 lat (msec) : 20=0.09%, 50=15.82%, 100=72.80%, 250=11.29% 00:26:22.262 cpu : usr=38.61%, sys=0.53%, ctx=1050, majf=0, minf=9 00:26:22.262 IO depths : 1=2.3%, 2=5.0%, 4=14.3%, 8=67.8%, 16=10.6%, 32=0.0%, >=64=0.0% 00:26:22.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.262 complete : 0=0.0%, 4=91.2%, 8=3.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.262 issued rwts: total=2206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.262 filename1: (groupid=0, jobs=1): err= 0: pid=101999: Sat Jul 13 00:35:07 2024 00:26:22.262 read: IOPS=237, 
BW=952KiB/s (974kB/s)(9540KiB/10025msec) 00:26:22.262 slat (usec): min=7, max=8037, avg=19.65, stdev=232.23 00:26:22.262 clat (msec): min=24, max=144, avg=67.06, stdev=21.49 00:26:22.262 lat (msec): min=24, max=144, avg=67.08, stdev=21.50 00:26:22.262 clat percentiles (msec): 00:26:22.262 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 48], 00:26:22.262 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 70], 00:26:22.262 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 107], 00:26:22.262 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:26:22.262 | 99.99th=[ 146] 00:26:22.262 bw ( KiB/s): min= 688, max= 1152, per=4.31%, avg=950.00, stdev=153.62, samples=20 00:26:22.262 iops : min= 172, max= 288, avg=237.50, stdev=38.40, samples=20 00:26:22.262 lat (msec) : 50=23.94%, 100=69.48%, 250=6.58% 00:26:22.262 cpu : usr=33.10%, sys=0.57%, ctx=894, majf=0, minf=9 00:26:22.262 IO depths : 1=0.6%, 2=1.6%, 4=8.0%, 8=76.6%, 16=13.1%, 32=0.0%, >=64=0.0% 00:26:22.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.262 complete : 0=0.0%, 4=89.6%, 8=6.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.262 issued rwts: total=2385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.262 filename1: (groupid=0, jobs=1): err= 0: pid=102000: Sat Jul 13 00:35:07 2024 00:26:22.262 read: IOPS=210, BW=840KiB/s (860kB/s)(8416KiB/10017msec) 00:26:22.262 slat (usec): min=6, max=8030, avg=22.77, stdev=212.64 00:26:22.262 clat (msec): min=22, max=166, avg=76.04, stdev=24.42 00:26:22.262 lat (msec): min=22, max=166, avg=76.06, stdev=24.42 00:26:22.262 clat percentiles (msec): 00:26:22.262 | 1.00th=[ 33], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 57], 00:26:22.262 | 30.00th=[ 63], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 80], 00:26:22.262 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 110], 95.00th=[ 120], 00:26:22.262 | 99.00th=[ 136], 99.50th=[ 150], 99.90th=[ 167], 99.95th=[ 167], 00:26:22.262 | 99.99th=[ 167] 00:26:22.262 bw ( KiB/s): min= 552, max= 1120, per=3.76%, avg=827.84, stdev=143.49, samples=19 00:26:22.262 iops : min= 138, max= 280, avg=206.95, stdev=35.88, samples=19 00:26:22.262 lat (msec) : 50=12.88%, 100=69.49%, 250=17.63% 00:26:22.262 cpu : usr=39.33%, sys=0.60%, ctx=1298, majf=0, minf=9 00:26:22.262 IO depths : 1=1.1%, 2=2.7%, 4=10.8%, 8=72.8%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:22.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.262 complete : 0=0.0%, 4=90.3%, 8=5.3%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.262 issued rwts: total=2104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.262 filename2: (groupid=0, jobs=1): err= 0: pid=102001: Sat Jul 13 00:35:07 2024 00:26:22.262 read: IOPS=227, BW=910KiB/s (932kB/s)(9124KiB/10024msec) 00:26:22.262 slat (usec): min=6, max=11030, avg=23.18, stdev=272.56 00:26:22.262 clat (msec): min=23, max=147, avg=70.16, stdev=22.27 00:26:22.262 lat (msec): min=24, max=147, avg=70.18, stdev=22.28 00:26:22.262 clat percentiles (msec): 00:26:22.262 | 1.00th=[ 27], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 53], 00:26:22.262 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 68], 60.00th=[ 73], 00:26:22.262 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 100], 95.00th=[ 112], 00:26:22.262 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 148], 99.95th=[ 148], 00:26:22.262 | 99.99th=[ 148] 00:26:22.262 bw ( KiB/s): min= 634, max= 1280, per=4.11%, avg=905.80, 
stdev=179.54, samples=20 00:26:22.262 iops : min= 158, max= 320, avg=226.40, stdev=44.89, samples=20 00:26:22.262 lat (msec) : 50=18.06%, 100=72.29%, 250=9.64% 00:26:22.262 cpu : usr=42.64%, sys=0.75%, ctx=1264, majf=0, minf=9 00:26:22.262 IO depths : 1=2.1%, 2=4.7%, 4=13.0%, 8=68.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:26:22.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.262 complete : 0=0.0%, 4=91.1%, 8=4.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.262 issued rwts: total=2281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.262 filename2: (groupid=0, jobs=1): err= 0: pid=102002: Sat Jul 13 00:35:07 2024 00:26:22.262 read: IOPS=234, BW=937KiB/s (959kB/s)(9388KiB/10021msec) 00:26:22.262 slat (usec): min=4, max=8030, avg=22.19, stdev=286.39 00:26:22.262 clat (msec): min=23, max=132, avg=68.12, stdev=21.10 00:26:22.262 lat (msec): min=23, max=133, avg=68.14, stdev=21.09 00:26:22.262 clat percentiles (msec): 00:26:22.262 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 48], 00:26:22.262 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:26:22.262 | 70.00th=[ 75], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 107], 00:26:22.262 | 99.00th=[ 131], 99.50th=[ 131], 99.90th=[ 133], 99.95th=[ 133], 00:26:22.262 | 99.99th=[ 133] 00:26:22.262 bw ( KiB/s): min= 656, max= 1200, per=4.24%, avg=933.30, stdev=152.43, samples=20 00:26:22.262 iops : min= 164, max= 300, avg=233.30, stdev=38.13, samples=20 00:26:22.262 lat (msec) : 50=23.99%, 100=69.49%, 250=6.52% 00:26:22.262 cpu : usr=32.45%, sys=0.48%, ctx=887, majf=0, minf=9 00:26:22.262 IO depths : 1=0.2%, 2=1.0%, 4=6.8%, 8=77.8%, 16=14.2%, 32=0.0%, >=64=0.0% 00:26:22.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.262 complete : 0=0.0%, 4=89.7%, 8=6.5%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.262 issued rwts: total=2347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.262 filename2: (groupid=0, jobs=1): err= 0: pid=102003: Sat Jul 13 00:35:07 2024 00:26:22.262 read: IOPS=246, BW=987KiB/s (1010kB/s)(9868KiB/10001msec) 00:26:22.262 slat (usec): min=7, max=4060, avg=16.64, stdev=134.36 00:26:22.262 clat (msec): min=4, max=161, avg=64.76, stdev=23.17 00:26:22.262 lat (msec): min=4, max=161, avg=64.78, stdev=23.18 00:26:22.262 clat percentiles (msec): 00:26:22.262 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 47], 00:26:22.262 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 68], 00:26:22.262 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 104], 00:26:22.262 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 161], 99.95th=[ 161], 00:26:22.262 | 99.99th=[ 161] 00:26:22.262 bw ( KiB/s): min= 640, max= 1512, per=4.47%, avg=984.84, stdev=209.14, samples=19 00:26:22.262 iops : min= 160, max= 378, avg=246.21, stdev=52.29, samples=19 00:26:22.262 lat (msec) : 10=1.30%, 20=0.93%, 50=28.66%, 100=63.07%, 250=6.04% 00:26:22.262 cpu : usr=36.60%, sys=0.65%, ctx=1134, majf=0, minf=9 00:26:22.262 IO depths : 1=1.1%, 2=2.3%, 4=8.8%, 8=75.3%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:22.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.262 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.262 issued rwts: total=2467,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.262 filename2: (groupid=0, jobs=1): 
err= 0: pid=102004: Sat Jul 13 00:35:07 2024 00:26:22.262 read: IOPS=237, BW=949KiB/s (972kB/s)(9516KiB/10026msec) 00:26:22.262 slat (usec): min=4, max=8029, avg=21.31, stdev=217.96 00:26:22.262 clat (msec): min=17, max=152, avg=67.27, stdev=23.01 00:26:22.262 lat (msec): min=17, max=152, avg=67.29, stdev=23.02 00:26:22.262 clat percentiles (msec): 00:26:22.262 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:26:22.262 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 70], 00:26:22.262 | 70.00th=[ 79], 80.00th=[ 88], 90.00th=[ 95], 95.00th=[ 108], 00:26:22.262 | 99.00th=[ 133], 99.50th=[ 148], 99.90th=[ 153], 99.95th=[ 153], 00:26:22.262 | 99.99th=[ 153] 00:26:22.262 bw ( KiB/s): min= 637, max= 1432, per=4.29%, avg=945.05, stdev=215.11, samples=20 00:26:22.262 iops : min= 159, max= 358, avg=236.25, stdev=53.80, samples=20 00:26:22.262 lat (msec) : 20=0.59%, 50=26.99%, 100=65.87%, 250=6.56% 00:26:22.262 cpu : usr=42.30%, sys=0.78%, ctx=1059, majf=0, minf=9 00:26:22.262 IO depths : 1=1.3%, 2=3.3%, 4=11.3%, 8=72.0%, 16=12.1%, 32=0.0%, >=64=0.0% 00:26:22.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.262 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.262 issued rwts: total=2379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.262 filename2: (groupid=0, jobs=1): err= 0: pid=102005: Sat Jul 13 00:35:07 2024 00:26:22.262 read: IOPS=249, BW=999KiB/s (1023kB/s)(9.79MiB/10034msec) 00:26:22.262 slat (usec): min=7, max=8032, avg=19.86, stdev=229.52 00:26:22.262 clat (msec): min=6, max=151, avg=63.89, stdev=22.18 00:26:22.262 lat (msec): min=6, max=151, avg=63.91, stdev=22.18 00:26:22.263 clat percentiles (msec): 00:26:22.263 | 1.00th=[ 9], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 47], 00:26:22.263 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 69], 00:26:22.263 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 95], 95.00th=[ 108], 00:26:22.263 | 99.00th=[ 123], 99.50th=[ 130], 99.90th=[ 153], 99.95th=[ 153], 00:26:22.263 | 99.99th=[ 153] 00:26:22.263 bw ( KiB/s): min= 688, max= 1408, per=4.52%, avg=996.00, stdev=174.18, samples=20 00:26:22.263 iops : min= 172, max= 352, avg=249.00, stdev=43.55, samples=20 00:26:22.263 lat (msec) : 10=1.28%, 20=0.64%, 50=26.58%, 100=64.72%, 250=6.78% 00:26:22.263 cpu : usr=37.71%, sys=0.76%, ctx=837, majf=0, minf=9 00:26:22.263 IO depths : 1=0.6%, 2=1.3%, 4=6.8%, 8=78.1%, 16=13.2%, 32=0.0%, >=64=0.0% 00:26:22.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.263 complete : 0=0.0%, 4=89.3%, 8=6.4%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.263 issued rwts: total=2506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.263 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.263 filename2: (groupid=0, jobs=1): err= 0: pid=102006: Sat Jul 13 00:35:07 2024 00:26:22.263 read: IOPS=239, BW=959KiB/s (982kB/s)(9596KiB/10005msec) 00:26:22.263 slat (usec): min=4, max=8033, avg=17.12, stdev=182.99 00:26:22.263 clat (msec): min=23, max=139, avg=66.59, stdev=23.37 00:26:22.263 lat (msec): min=23, max=139, avg=66.61, stdev=23.37 00:26:22.263 clat percentiles (msec): 00:26:22.263 | 1.00th=[ 31], 5.00th=[ 38], 10.00th=[ 41], 20.00th=[ 45], 00:26:22.263 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 70], 00:26:22.263 | 70.00th=[ 77], 80.00th=[ 87], 90.00th=[ 100], 95.00th=[ 111], 00:26:22.263 | 99.00th=[ 134], 99.50th=[ 136], 99.90th=[ 140], 99.95th=[ 140], 00:26:22.263 
| 99.99th=[ 140] 00:26:22.263 bw ( KiB/s): min= 512, max= 1328, per=4.37%, avg=962.95, stdev=235.91, samples=19 00:26:22.263 iops : min= 128, max= 332, avg=240.74, stdev=58.98, samples=19 00:26:22.263 lat (msec) : 50=30.68%, 100=61.03%, 250=8.30% 00:26:22.263 cpu : usr=41.74%, sys=0.82%, ctx=1595, majf=0, minf=9 00:26:22.263 IO depths : 1=1.5%, 2=3.8%, 4=13.2%, 8=69.8%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:22.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.263 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.263 issued rwts: total=2399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.263 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.263 filename2: (groupid=0, jobs=1): err= 0: pid=102007: Sat Jul 13 00:35:07 2024 00:26:22.263 read: IOPS=276, BW=1108KiB/s (1134kB/s)(10.9MiB/10039msec) 00:26:22.263 slat (usec): min=4, max=4031, avg=13.48, stdev=76.57 00:26:22.263 clat (usec): min=732, max=141050, avg=57570.77, stdev=28582.85 00:26:22.263 lat (usec): min=739, max=141072, avg=57584.24, stdev=28582.34 00:26:22.263 clat percentiles (usec): 00:26:22.263 | 1.00th=[ 1237], 5.00th=[ 1598], 10.00th=[ 8586], 20.00th=[ 38011], 00:26:22.263 | 30.00th=[ 44827], 40.00th=[ 47973], 50.00th=[ 57934], 60.00th=[ 63177], 00:26:22.263 | 70.00th=[ 69731], 80.00th=[ 80217], 90.00th=[ 95945], 95.00th=[107480], 00:26:22.263 | 99.00th=[126354], 99.50th=[129500], 99.90th=[141558], 99.95th=[141558], 00:26:22.263 | 99.99th=[141558] 00:26:22.263 bw ( KiB/s): min= 640, max= 3600, per=5.04%, avg=1109.20, stdev=619.17, samples=20 00:26:22.263 iops : min= 160, max= 900, avg=277.25, stdev=154.82, samples=20 00:26:22.263 lat (usec) : 750=0.11%, 1000=0.32% 00:26:22.263 lat (msec) : 2=5.50%, 4=1.76%, 10=2.30%, 20=0.43%, 50=32.19% 00:26:22.263 lat (msec) : 100=50.68%, 250=6.69% 00:26:22.263 cpu : usr=40.47%, sys=0.69%, ctx=1261, majf=0, minf=0 00:26:22.263 IO depths : 1=0.3%, 2=0.6%, 4=5.4%, 8=79.3%, 16=14.4%, 32=0.0%, >=64=0.0% 00:26:22.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.263 complete : 0=0.0%, 4=89.2%, 8=7.4%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.263 issued rwts: total=2780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.263 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.263 filename2: (groupid=0, jobs=1): err= 0: pid=102008: Sat Jul 13 00:35:07 2024 00:26:22.263 read: IOPS=249, BW=998KiB/s (1022kB/s)(9.77MiB/10025msec) 00:26:22.263 slat (usec): min=4, max=8017, avg=14.95, stdev=160.17 00:26:22.263 clat (msec): min=24, max=120, avg=63.95, stdev=20.24 00:26:22.263 lat (msec): min=24, max=120, avg=63.96, stdev=20.25 00:26:22.263 clat percentiles (msec): 00:26:22.263 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 47], 00:26:22.263 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 69], 00:26:22.263 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 93], 95.00th=[ 102], 00:26:22.263 | 99.00th=[ 115], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 121], 00:26:22.263 | 99.99th=[ 121] 00:26:22.263 bw ( KiB/s): min= 736, max= 1248, per=4.53%, avg=997.95, stdev=153.10, samples=20 00:26:22.263 iops : min= 184, max= 312, avg=249.45, stdev=38.30, samples=20 00:26:22.263 lat (msec) : 50=33.81%, 100=60.83%, 250=5.36% 00:26:22.263 cpu : usr=32.47%, sys=0.49%, ctx=890, majf=0, minf=9 00:26:22.263 IO depths : 1=0.1%, 2=0.2%, 4=4.8%, 8=80.6%, 16=14.3%, 32=0.0%, >=64=0.0% 00:26:22.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.263 complete : 0=0.0%, 
4=88.9%, 8=7.4%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.263 issued rwts: total=2502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.263 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:22.263 00:26:22.263 Run status group 0 (all jobs): 00:26:22.263 READ: bw=21.5MiB/s (22.5MB/s), 824KiB/s-1108KiB/s (844kB/s-1134kB/s), io=216MiB (226MB), run=10001-10039msec 00:26:22.263 00:35:07 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:22.263 00:35:07 -- target/dif.sh@43 -- # local sub 00:26:22.263 00:35:07 -- target/dif.sh@45 -- # for sub in "$@" 00:26:22.263 00:35:07 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:22.263 00:35:07 -- target/dif.sh@36 -- # local sub_id=0 00:26:22.263 00:35:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:22.263 00:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.263 00:35:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.263 00:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.263 00:35:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:22.263 00:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.263 00:35:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.263 00:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.263 00:35:07 -- target/dif.sh@45 -- # for sub in "$@" 00:26:22.263 00:35:07 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:22.263 00:35:07 -- target/dif.sh@36 -- # local sub_id=1 00:26:22.263 00:35:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:22.263 00:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.263 00:35:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.263 00:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.263 00:35:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:22.263 00:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.263 00:35:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.263 00:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.263 00:35:07 -- target/dif.sh@45 -- # for sub in "$@" 00:26:22.263 00:35:07 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:22.263 00:35:07 -- target/dif.sh@36 -- # local sub_id=2 00:26:22.263 00:35:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:22.263 00:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.263 00:35:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.263 00:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.263 00:35:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:22.263 00:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.263 00:35:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.263 00:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.263 00:35:07 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:22.263 00:35:07 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:22.264 00:35:07 -- target/dif.sh@115 -- # numjobs=2 00:26:22.264 00:35:07 -- target/dif.sh@115 -- # iodepth=8 00:26:22.264 00:35:07 -- target/dif.sh@115 -- # runtime=5 00:26:22.264 00:35:07 -- target/dif.sh@115 -- # files=1 00:26:22.264 00:35:07 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:22.264 00:35:07 -- target/dif.sh@28 -- # local sub 00:26:22.264 00:35:07 -- target/dif.sh@30 -- # for sub in "$@" 00:26:22.264 00:35:07 -- target/dif.sh@31 -- # create_subsystem 0 00:26:22.264 
00:35:07 -- target/dif.sh@18 -- # local sub_id=0 00:26:22.264 00:35:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:22.264 00:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.264 00:35:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.264 bdev_null0 00:26:22.264 00:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.264 00:35:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:22.264 00:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.264 00:35:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.264 00:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.264 00:35:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:22.264 00:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.264 00:35:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.264 00:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.264 00:35:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:22.264 00:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.264 00:35:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.264 [2024-07-13 00:35:07.769094] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.264 00:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.264 00:35:07 -- target/dif.sh@30 -- # for sub in "$@" 00:26:22.264 00:35:07 -- target/dif.sh@31 -- # create_subsystem 1 00:26:22.264 00:35:07 -- target/dif.sh@18 -- # local sub_id=1 00:26:22.264 00:35:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:22.264 00:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.264 00:35:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.264 bdev_null1 00:26:22.264 00:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.264 00:35:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:22.264 00:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.264 00:35:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.264 00:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.264 00:35:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:22.264 00:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.264 00:35:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.264 00:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.264 00:35:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:22.264 00:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.264 00:35:07 -- common/autotest_common.sh@10 -- # set +x 00:26:22.264 00:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.264 00:35:07 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:22.264 00:35:07 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:22.264 00:35:07 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:22.264 00:35:07 -- nvmf/common.sh@520 -- # config=() 00:26:22.264 00:35:07 -- nvmf/common.sh@520 -- # local subsystem config 00:26:22.264 00:35:07 -- nvmf/common.sh@522 -- 
# for subsystem in "${@:-1}" 00:26:22.264 00:35:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:22.264 { 00:26:22.264 "params": { 00:26:22.264 "name": "Nvme$subsystem", 00:26:22.264 "trtype": "$TEST_TRANSPORT", 00:26:22.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.264 "adrfam": "ipv4", 00:26:22.264 "trsvcid": "$NVMF_PORT", 00:26:22.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.264 "hdgst": ${hdgst:-false}, 00:26:22.264 "ddgst": ${ddgst:-false} 00:26:22.264 }, 00:26:22.264 "method": "bdev_nvme_attach_controller" 00:26:22.264 } 00:26:22.264 EOF 00:26:22.264 )") 00:26:22.264 00:35:07 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.264 00:35:07 -- target/dif.sh@82 -- # gen_fio_conf 00:26:22.264 00:35:07 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.264 00:35:07 -- target/dif.sh@54 -- # local file 00:26:22.264 00:35:07 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:22.264 00:35:07 -- target/dif.sh@56 -- # cat 00:26:22.264 00:35:07 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:22.264 00:35:07 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:22.264 00:35:07 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:22.264 00:35:07 -- common/autotest_common.sh@1320 -- # shift 00:26:22.264 00:35:07 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:22.264 00:35:07 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:22.264 00:35:07 -- nvmf/common.sh@542 -- # cat 00:26:22.264 00:35:07 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:22.264 00:35:07 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:22.264 00:35:07 -- target/dif.sh@72 -- # (( file <= files )) 00:26:22.264 00:35:07 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:22.264 00:35:07 -- target/dif.sh@73 -- # cat 00:26:22.264 00:35:07 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:22.264 00:35:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:22.264 00:35:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:22.264 { 00:26:22.264 "params": { 00:26:22.264 "name": "Nvme$subsystem", 00:26:22.264 "trtype": "$TEST_TRANSPORT", 00:26:22.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.264 "adrfam": "ipv4", 00:26:22.264 "trsvcid": "$NVMF_PORT", 00:26:22.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.264 "hdgst": ${hdgst:-false}, 00:26:22.264 "ddgst": ${ddgst:-false} 00:26:22.264 }, 00:26:22.264 "method": "bdev_nvme_attach_controller" 00:26:22.264 } 00:26:22.264 EOF 00:26:22.264 )") 00:26:22.264 00:35:07 -- target/dif.sh@72 -- # (( file++ )) 00:26:22.264 00:35:07 -- target/dif.sh@72 -- # (( file <= files )) 00:26:22.264 00:35:07 -- nvmf/common.sh@542 -- # cat 00:26:22.264 00:35:07 -- nvmf/common.sh@544 -- # jq . 
00:26:22.264 00:35:07 -- nvmf/common.sh@545 -- # IFS=, 00:26:22.264 00:35:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:22.264 "params": { 00:26:22.264 "name": "Nvme0", 00:26:22.264 "trtype": "tcp", 00:26:22.264 "traddr": "10.0.0.2", 00:26:22.264 "adrfam": "ipv4", 00:26:22.264 "trsvcid": "4420", 00:26:22.264 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:22.264 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:22.264 "hdgst": false, 00:26:22.264 "ddgst": false 00:26:22.264 }, 00:26:22.264 "method": "bdev_nvme_attach_controller" 00:26:22.264 },{ 00:26:22.264 "params": { 00:26:22.264 "name": "Nvme1", 00:26:22.264 "trtype": "tcp", 00:26:22.264 "traddr": "10.0.0.2", 00:26:22.264 "adrfam": "ipv4", 00:26:22.264 "trsvcid": "4420", 00:26:22.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:22.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:22.264 "hdgst": false, 00:26:22.264 "ddgst": false 00:26:22.264 }, 00:26:22.264 "method": "bdev_nvme_attach_controller" 00:26:22.264 }' 00:26:22.264 00:35:07 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:22.264 00:35:07 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:22.264 00:35:07 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:22.265 00:35:07 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:22.265 00:35:07 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:22.265 00:35:07 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:22.265 00:35:07 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:22.265 00:35:07 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:22.265 00:35:07 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:22.265 00:35:07 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.265 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:22.265 ... 00:26:22.265 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:22.265 ... 00:26:22.265 fio-3.35 00:26:22.265 Starting 4 threads 00:26:22.265 [2024-07-13 00:35:08.501319] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:22.265 [2024-07-13 00:35:08.501403] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:26.457 00:26:26.457 filename0: (groupid=0, jobs=1): err= 0: pid=102135: Sat Jul 13 00:35:13 2024 00:26:26.457 read: IOPS=2158, BW=16.9MiB/s (17.7MB/s)(84.4MiB/5003msec) 00:26:26.457 slat (nsec): min=3551, max=77015, avg=10593.47, stdev=6419.68 00:26:26.457 clat (usec): min=1074, max=5654, avg=3651.66, stdev=155.88 00:26:26.457 lat (usec): min=1083, max=5662, avg=3662.25, stdev=156.14 00:26:26.457 clat percentiles (usec): 00:26:26.457 | 1.00th=[ 3294], 5.00th=[ 3490], 10.00th=[ 3523], 20.00th=[ 3556], 00:26:26.457 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3621], 60.00th=[ 3654], 00:26:26.457 | 70.00th=[ 3687], 80.00th=[ 3752], 90.00th=[ 3818], 95.00th=[ 3884], 00:26:26.457 | 99.00th=[ 4047], 99.50th=[ 4113], 99.90th=[ 4490], 99.95th=[ 4555], 00:26:26.457 | 99.99th=[ 4686] 00:26:26.457 bw ( KiB/s): min=16896, max=17520, per=25.11%, avg=17304.44, stdev=192.84, samples=9 00:26:26.457 iops : min= 2112, max= 2190, avg=2163.00, stdev=24.04, samples=9 00:26:26.457 lat (msec) : 2=0.07%, 4=97.84%, 10=2.08% 00:26:26.457 cpu : usr=95.62%, sys=3.24%, ctx=6, majf=0, minf=0 00:26:26.457 IO depths : 1=9.9%, 2=24.2%, 4=50.8%, 8=15.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:26.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.457 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.457 issued rwts: total=10800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:26.457 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:26.457 filename0: (groupid=0, jobs=1): err= 0: pid=102136: Sat Jul 13 00:35:13 2024 00:26:26.457 read: IOPS=2145, BW=16.8MiB/s (17.6MB/s)(83.9MiB/5002msec) 00:26:26.457 slat (usec): min=6, max=411, avg=14.95, stdev= 8.39 00:26:26.457 clat (usec): min=1841, max=6821, avg=3662.01, stdev=244.86 00:26:26.457 lat (usec): min=1854, max=6839, avg=3676.96, stdev=244.83 00:26:26.457 clat percentiles (usec): 00:26:26.457 | 1.00th=[ 3163], 5.00th=[ 3458], 10.00th=[ 3490], 20.00th=[ 3556], 00:26:26.457 | 30.00th=[ 3589], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3654], 00:26:26.457 | 70.00th=[ 3687], 80.00th=[ 3752], 90.00th=[ 3818], 95.00th=[ 3982], 00:26:26.457 | 99.00th=[ 4555], 99.50th=[ 4948], 99.90th=[ 6718], 99.95th=[ 6718], 00:26:26.457 | 99.99th=[ 6783] 00:26:26.457 bw ( KiB/s): min=16816, max=17408, per=24.95%, avg=17193.11, stdev=215.71, samples=9 00:26:26.457 iops : min= 2102, max= 2176, avg=2149.11, stdev=26.95, samples=9 00:26:26.457 lat (msec) : 2=0.02%, 4=95.63%, 10=4.35% 00:26:26.457 cpu : usr=93.10%, sys=4.80%, ctx=28, majf=0, minf=9 00:26:26.457 IO depths : 1=3.8%, 2=21.7%, 4=53.2%, 8=21.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:26.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.457 complete : 0=0.0%, 4=89.8%, 8=10.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.457 issued rwts: total=10734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:26.457 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:26.457 filename1: (groupid=0, jobs=1): err= 0: pid=102137: Sat Jul 13 00:35:13 2024 00:26:26.457 read: IOPS=2155, BW=16.8MiB/s (17.7MB/s)(84.2MiB/5001msec) 00:26:26.457 slat (nsec): min=6526, max=82665, avg=14515.91, stdev=7050.80 00:26:26.457 clat (usec): min=937, max=6175, avg=3643.71, stdev=182.58 00:26:26.457 lat (usec): min=945, max=6182, avg=3658.23, stdev=182.47 00:26:26.457 clat percentiles (usec): 00:26:26.457 | 1.00th=[ 3261], 5.00th=[ 3458], 
10.00th=[ 3490], 20.00th=[ 3556], 00:26:26.457 | 30.00th=[ 3589], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3654], 00:26:26.457 | 70.00th=[ 3687], 80.00th=[ 3720], 90.00th=[ 3818], 95.00th=[ 3884], 00:26:26.457 | 99.00th=[ 4146], 99.50th=[ 4359], 99.90th=[ 5342], 99.95th=[ 5735], 00:26:26.457 | 99.99th=[ 6063] 00:26:26.457 bw ( KiB/s): min=17024, max=17408, per=25.06%, avg=17274.67, stdev=121.33, samples=9 00:26:26.457 iops : min= 2128, max= 2176, avg=2159.33, stdev=15.17, samples=9 00:26:26.457 lat (usec) : 1000=0.05% 00:26:26.457 lat (msec) : 2=0.04%, 4=97.42%, 10=2.50% 00:26:26.457 cpu : usr=94.78%, sys=3.90%, ctx=7, majf=0, minf=9 00:26:26.457 IO depths : 1=8.3%, 2=24.5%, 4=50.5%, 8=16.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:26.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.457 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.457 issued rwts: total=10781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:26.457 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:26.457 filename1: (groupid=0, jobs=1): err= 0: pid=102138: Sat Jul 13 00:35:13 2024 00:26:26.457 read: IOPS=2156, BW=16.8MiB/s (17.7MB/s)(84.3MiB/5002msec) 00:26:26.457 slat (nsec): min=6620, max=80723, avg=15102.60, stdev=7288.09 00:26:26.457 clat (usec): min=935, max=5288, avg=3636.06, stdev=149.33 00:26:26.457 lat (usec): min=941, max=5301, avg=3651.16, stdev=149.26 00:26:26.457 clat percentiles (usec): 00:26:26.457 | 1.00th=[ 3359], 5.00th=[ 3458], 10.00th=[ 3490], 20.00th=[ 3556], 00:26:26.457 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3654], 00:26:26.457 | 70.00th=[ 3687], 80.00th=[ 3720], 90.00th=[ 3785], 95.00th=[ 3884], 00:26:26.457 | 99.00th=[ 4047], 99.50th=[ 4146], 99.90th=[ 4555], 99.95th=[ 4621], 00:26:26.457 | 99.99th=[ 4883] 00:26:26.457 bw ( KiB/s): min=16896, max=17408, per=25.08%, avg=17285.33, stdev=157.58, samples=9 00:26:26.457 iops : min= 2112, max= 2176, avg=2160.67, stdev=19.70, samples=9 00:26:26.457 lat (usec) : 1000=0.03% 00:26:26.457 lat (msec) : 4=98.11%, 10=1.86% 00:26:26.457 cpu : usr=94.94%, sys=3.74%, ctx=9, majf=0, minf=0 00:26:26.457 IO depths : 1=11.9%, 2=24.9%, 4=50.1%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:26.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.457 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.457 issued rwts: total=10787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:26.457 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:26.457 00:26:26.457 Run status group 0 (all jobs): 00:26:26.457 READ: bw=67.3MiB/s (70.6MB/s), 16.8MiB/s-16.9MiB/s (17.6MB/s-17.7MB/s), io=337MiB (353MB), run=5001-5003msec 00:26:26.715 00:35:13 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:26.715 00:35:13 -- target/dif.sh@43 -- # local sub 00:26:26.715 00:35:13 -- target/dif.sh@45 -- # for sub in "$@" 00:26:26.715 00:35:13 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:26.715 00:35:13 -- target/dif.sh@36 -- # local sub_id=0 00:26:26.715 00:35:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:26.715 00:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.715 00:35:13 -- common/autotest_common.sh@10 -- # set +x 00:26:26.715 00:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.715 00:35:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:26.715 00:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.715 00:35:13 -- 
common/autotest_common.sh@10 -- # set +x 00:26:26.715 00:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.715 00:35:13 -- target/dif.sh@45 -- # for sub in "$@" 00:26:26.715 00:35:13 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:26.715 00:35:13 -- target/dif.sh@36 -- # local sub_id=1 00:26:26.715 00:35:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:26.715 00:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.715 00:35:13 -- common/autotest_common.sh@10 -- # set +x 00:26:26.715 00:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.715 00:35:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:26.715 00:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.715 00:35:13 -- common/autotest_common.sh@10 -- # set +x 00:26:26.715 00:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.715 00:26:26.715 real 0m23.689s 00:26:26.715 user 2m7.957s 00:26:26.715 sys 0m3.859s 00:26:26.715 00:35:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:26.715 ************************************ 00:26:26.715 END TEST fio_dif_rand_params 00:26:26.715 ************************************ 00:26:26.715 00:35:13 -- common/autotest_common.sh@10 -- # set +x 00:26:26.715 00:35:13 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:26.715 00:35:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:26.715 00:35:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:26.715 00:35:13 -- common/autotest_common.sh@10 -- # set +x 00:26:26.715 ************************************ 00:26:26.715 START TEST fio_dif_digest 00:26:26.715 ************************************ 00:26:26.715 00:35:13 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:26:26.715 00:35:13 -- target/dif.sh@123 -- # local NULL_DIF 00:26:26.715 00:35:13 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:26.715 00:35:13 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:26.715 00:35:13 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:26.715 00:35:13 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:26.715 00:35:13 -- target/dif.sh@127 -- # numjobs=3 00:26:26.715 00:35:13 -- target/dif.sh@127 -- # iodepth=3 00:26:26.715 00:35:13 -- target/dif.sh@127 -- # runtime=10 00:26:26.715 00:35:13 -- target/dif.sh@128 -- # hdgst=true 00:26:26.715 00:35:13 -- target/dif.sh@128 -- # ddgst=true 00:26:26.715 00:35:13 -- target/dif.sh@130 -- # create_subsystems 0 00:26:26.715 00:35:13 -- target/dif.sh@28 -- # local sub 00:26:26.715 00:35:13 -- target/dif.sh@30 -- # for sub in "$@" 00:26:26.715 00:35:13 -- target/dif.sh@31 -- # create_subsystem 0 00:26:26.715 00:35:13 -- target/dif.sh@18 -- # local sub_id=0 00:26:26.715 00:35:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:26.715 00:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.715 00:35:13 -- common/autotest_common.sh@10 -- # set +x 00:26:26.974 bdev_null0 00:26:26.974 00:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.974 00:35:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:26.974 00:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.974 00:35:13 -- common/autotest_common.sh@10 -- # set +x 00:26:26.974 00:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.974 00:35:13 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:26.974 00:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.974 00:35:13 -- common/autotest_common.sh@10 -- # set +x 00:26:26.974 00:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.974 00:35:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:26.974 00:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.974 00:35:13 -- common/autotest_common.sh@10 -- # set +x 00:26:26.974 [2024-07-13 00:35:13.966479] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.974 00:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.974 00:35:13 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:26.974 00:35:13 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:26.974 00:35:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:26.974 00:35:13 -- nvmf/common.sh@520 -- # config=() 00:26:26.974 00:35:13 -- nvmf/common.sh@520 -- # local subsystem config 00:26:26.974 00:35:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:26.974 00:35:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:26.974 00:35:13 -- target/dif.sh@82 -- # gen_fio_conf 00:26:26.974 00:35:13 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:26.974 00:35:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:26.974 { 00:26:26.974 "params": { 00:26:26.974 "name": "Nvme$subsystem", 00:26:26.974 "trtype": "$TEST_TRANSPORT", 00:26:26.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.974 "adrfam": "ipv4", 00:26:26.974 "trsvcid": "$NVMF_PORT", 00:26:26.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.974 "hdgst": ${hdgst:-false}, 00:26:26.974 "ddgst": ${ddgst:-false} 00:26:26.974 }, 00:26:26.974 "method": "bdev_nvme_attach_controller" 00:26:26.974 } 00:26:26.974 EOF 00:26:26.974 )") 00:26:26.974 00:35:13 -- target/dif.sh@54 -- # local file 00:26:26.974 00:35:13 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:26.974 00:35:13 -- target/dif.sh@56 -- # cat 00:26:26.974 00:35:13 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:26.974 00:35:13 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:26.974 00:35:13 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:26.974 00:35:13 -- common/autotest_common.sh@1320 -- # shift 00:26:26.974 00:35:13 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:26.974 00:35:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.974 00:35:13 -- nvmf/common.sh@542 -- # cat 00:26:26.974 00:35:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:26.974 00:35:13 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:26.974 00:35:13 -- target/dif.sh@72 -- # (( file <= files )) 00:26:26.974 00:35:13 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:26.974 00:35:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:26.974 00:35:13 -- nvmf/common.sh@544 -- # jq . 
00:26:26.974 00:35:13 -- nvmf/common.sh@545 -- # IFS=, 00:26:26.974 00:35:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:26.974 "params": { 00:26:26.974 "name": "Nvme0", 00:26:26.974 "trtype": "tcp", 00:26:26.974 "traddr": "10.0.0.2", 00:26:26.974 "adrfam": "ipv4", 00:26:26.974 "trsvcid": "4420", 00:26:26.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:26.974 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:26.974 "hdgst": true, 00:26:26.974 "ddgst": true 00:26:26.974 }, 00:26:26.974 "method": "bdev_nvme_attach_controller" 00:26:26.974 }' 00:26:26.974 00:35:14 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:26.975 00:35:14 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:26.975 00:35:14 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.975 00:35:14 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:26.975 00:35:14 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:26.975 00:35:14 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:26.975 00:35:14 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:26.975 00:35:14 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:26.975 00:35:14 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:26.975 00:35:14 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:26.975 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:26.975 ... 00:26:26.975 fio-3.35 00:26:26.975 Starting 3 threads 00:26:27.542 [2024-07-13 00:35:14.560153] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
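[editor's note] For readability, the target setup that the fio_dif_digest trace above performs through the test harness can be condensed as the sketch below. This is a reconstruction from the xtrace, not part of the captured output: rpc_cmd is the harness wrapper around scripts/rpc.py against the running nvmf_tgt, and the values simply restate what is visible in the log (null bdev with 16-byte metadata and DIF type 3, one subsystem, one namespace, one TCP listener, then fio attaching over NVMe/TCP with header and data digests enabled).
  # sketch reconstructed from the trace above (not captured log output)
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
  # fio then runs through the SPDK bdev plugin; /dev/fd/62 carries the JSON
  # printed above (bdev_nvme_attach_controller with hdgst/ddgst true) and
  # /dev/fd/61 carries the generated fio job file
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
The digest results for the three jobs follow below.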
00:26:27.542 [2024-07-13 00:35:14.560226] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:37.515 00:26:37.515 filename0: (groupid=0, jobs=1): err= 0: pid=102243: Sat Jul 13 00:35:24 2024 00:26:37.515 read: IOPS=273, BW=34.1MiB/s (35.8MB/s)(342MiB/10004msec) 00:26:37.515 slat (nsec): min=6930, max=71421, avg=18877.07, stdev=7940.49 00:26:37.515 clat (usec): min=8176, max=52603, avg=10956.23, stdev=1574.81 00:26:37.515 lat (usec): min=8197, max=52626, avg=10975.11, stdev=1574.69 00:26:37.515 clat percentiles (usec): 00:26:37.515 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:26:37.515 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:26:37.515 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:26:37.515 | 99.00th=[12780], 99.50th=[13173], 99.90th=[50594], 99.95th=[51119], 00:26:37.515 | 99.99th=[52691] 00:26:37.515 bw ( KiB/s): min=33280, max=36096, per=38.30%, avg=34983.89, stdev=744.40, samples=19 00:26:37.515 iops : min= 260, max= 282, avg=273.26, stdev= 5.86, samples=19 00:26:37.515 lat (msec) : 10=12.48%, 20=87.41%, 100=0.11% 00:26:37.515 cpu : usr=93.15%, sys=5.03%, ctx=7, majf=0, minf=0 00:26:37.515 IO depths : 1=3.0%, 2=97.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:37.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.515 issued rwts: total=2733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:37.515 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:37.515 filename0: (groupid=0, jobs=1): err= 0: pid=102244: Sat Jul 13 00:35:24 2024 00:26:37.515 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(292MiB/10004msec) 00:26:37.515 slat (nsec): min=6630, max=63101, avg=15253.52, stdev=7061.87 00:26:37.515 clat (usec): min=4596, max=17697, avg=12815.31, stdev=1124.65 00:26:37.515 lat (usec): min=4603, max=17729, avg=12830.56, stdev=1125.36 00:26:37.515 clat percentiles (usec): 00:26:37.515 | 1.00th=[10421], 5.00th=[11207], 10.00th=[11600], 20.00th=[11994], 00:26:37.515 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:26:37.516 | 70.00th=[13304], 80.00th=[13698], 90.00th=[14222], 95.00th=[14615], 00:26:37.516 | 99.00th=[15270], 99.50th=[15401], 99.90th=[16581], 99.95th=[16581], 00:26:37.516 | 99.99th=[17695] 00:26:37.516 bw ( KiB/s): min=27703, max=31744, per=32.69%, avg=29854.32, stdev=839.48, samples=19 00:26:37.516 iops : min= 216, max= 248, avg=233.21, stdev= 6.62, samples=19 00:26:37.516 lat (msec) : 10=0.86%, 20=99.14% 00:26:37.516 cpu : usr=94.97%, sys=3.71%, ctx=13, majf=0, minf=9 00:26:37.516 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:37.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.516 issued rwts: total=2338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:37.516 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:37.516 filename0: (groupid=0, jobs=1): err= 0: pid=102245: Sat Jul 13 00:35:24 2024 00:26:37.516 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(258MiB/10002msec) 00:26:37.516 slat (nsec): min=6595, max=69694, avg=14616.86, stdev=6895.89 00:26:37.516 clat (usec): min=8557, max=17373, avg=14495.12, stdev=853.15 00:26:37.516 lat (usec): min=8567, max=17416, avg=14509.74, stdev=854.06 00:26:37.516 clat percentiles (usec): 00:26:37.516 | 1.00th=[12780], 
5.00th=[13304], 10.00th=[13566], 20.00th=[13829], 00:26:37.516 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14484], 60.00th=[14615], 00:26:37.516 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15533], 95.00th=[15795], 00:26:37.516 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17171], 99.95th=[17433], 00:26:37.516 | 99.99th=[17433] 00:26:37.516 bw ( KiB/s): min=25344, max=27648, per=28.93%, avg=26421.89, stdev=612.85, samples=19 00:26:37.516 iops : min= 198, max= 216, avg=206.42, stdev= 4.79, samples=19 00:26:37.516 lat (msec) : 10=0.53%, 20=99.47% 00:26:37.516 cpu : usr=95.35%, sys=3.48%, ctx=13, majf=0, minf=9 00:26:37.516 IO depths : 1=8.6%, 2=91.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:37.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.516 issued rwts: total=2067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:37.516 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:37.516 00:26:37.516 Run status group 0 (all jobs): 00:26:37.516 READ: bw=89.2MiB/s (93.5MB/s), 25.8MiB/s-34.1MiB/s (27.1MB/s-35.8MB/s), io=892MiB (936MB), run=10002-10004msec 00:26:37.775 00:35:24 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:37.775 00:35:24 -- target/dif.sh@43 -- # local sub 00:26:37.775 00:35:24 -- target/dif.sh@45 -- # for sub in "$@" 00:26:37.775 00:35:24 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:37.775 00:35:24 -- target/dif.sh@36 -- # local sub_id=0 00:26:37.775 00:35:24 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:37.775 00:35:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:37.775 00:35:24 -- common/autotest_common.sh@10 -- # set +x 00:26:37.775 00:35:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:37.775 00:35:24 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:37.775 00:35:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:37.775 00:35:24 -- common/autotest_common.sh@10 -- # set +x 00:26:37.775 ************************************ 00:26:37.775 END TEST fio_dif_digest 00:26:37.775 ************************************ 00:26:37.775 00:35:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:37.775 00:26:37.775 real 0m10.980s 00:26:37.775 user 0m28.944s 00:26:37.775 sys 0m1.529s 00:26:37.775 00:35:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:37.775 00:35:24 -- common/autotest_common.sh@10 -- # set +x 00:26:37.775 00:35:24 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:37.775 00:35:24 -- target/dif.sh@147 -- # nvmftestfini 00:26:37.775 00:35:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:37.775 00:35:24 -- nvmf/common.sh@116 -- # sync 00:26:38.033 00:35:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:38.033 00:35:25 -- nvmf/common.sh@119 -- # set +e 00:26:38.033 00:35:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:38.033 00:35:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:38.033 rmmod nvme_tcp 00:26:38.033 rmmod nvme_fabrics 00:26:38.033 rmmod nvme_keyring 00:26:38.033 00:35:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:38.033 00:35:25 -- nvmf/common.sh@123 -- # set -e 00:26:38.033 00:35:25 -- nvmf/common.sh@124 -- # return 0 00:26:38.033 00:35:25 -- nvmf/common.sh@477 -- # '[' -n 101482 ']' 00:26:38.033 00:35:25 -- nvmf/common.sh@478 -- # killprocess 101482 00:26:38.033 00:35:25 -- common/autotest_common.sh@926 -- # '[' -z 101482 ']' 00:26:38.033 00:35:25 -- 
common/autotest_common.sh@930 -- # kill -0 101482 00:26:38.033 00:35:25 -- common/autotest_common.sh@931 -- # uname 00:26:38.033 00:35:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:38.033 00:35:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 101482 00:26:38.033 killing process with pid 101482 00:26:38.033 00:35:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:38.033 00:35:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:38.033 00:35:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 101482' 00:26:38.033 00:35:25 -- common/autotest_common.sh@945 -- # kill 101482 00:26:38.033 00:35:25 -- common/autotest_common.sh@950 -- # wait 101482 00:26:38.292 00:35:25 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:38.292 00:35:25 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:38.551 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:38.551 Waiting for block devices as requested 00:26:38.810 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:38.810 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:38.810 00:35:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:38.810 00:35:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:38.810 00:35:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:38.810 00:35:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:38.810 00:35:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.810 00:35:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:38.810 00:35:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.810 00:35:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:38.810 00:26:38.810 real 1m0.160s 00:26:38.810 user 3m53.556s 00:26:38.810 sys 0m13.152s 00:26:38.810 00:35:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.810 00:35:26 -- common/autotest_common.sh@10 -- # set +x 00:26:38.810 ************************************ 00:26:38.810 END TEST nvmf_dif 00:26:38.810 ************************************ 00:26:39.069 00:35:26 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:39.069 00:35:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:39.069 00:35:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:39.069 00:35:26 -- common/autotest_common.sh@10 -- # set +x 00:26:39.069 ************************************ 00:26:39.069 START TEST nvmf_abort_qd_sizes 00:26:39.069 ************************************ 00:26:39.069 00:35:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:39.069 * Looking for test storage... 
00:26:39.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:39.069 00:35:26 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:39.069 00:35:26 -- nvmf/common.sh@7 -- # uname -s 00:26:39.069 00:35:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.069 00:35:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.069 00:35:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.070 00:35:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.070 00:35:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.070 00:35:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.070 00:35:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.070 00:35:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.070 00:35:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.070 00:35:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.070 00:35:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 00:26:39.070 00:35:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=b51f2fb3-a914-4041-8557-0311547dd192 00:26:39.070 00:35:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.070 00:35:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.070 00:35:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:39.070 00:35:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:39.070 00:35:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.070 00:35:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.070 00:35:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.070 00:35:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.070 00:35:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.070 00:35:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.070 00:35:26 -- paths/export.sh@5 -- # export PATH 00:26:39.070 00:35:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.070 00:35:26 -- nvmf/common.sh@46 -- # : 0 00:26:39.070 00:35:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:39.070 00:35:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:39.070 00:35:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:39.070 00:35:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.070 00:35:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.070 00:35:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:39.070 00:35:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:39.070 00:35:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:39.070 00:35:26 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:39.070 00:35:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:39.070 00:35:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.070 00:35:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:39.070 00:35:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:39.070 00:35:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:39.070 00:35:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.070 00:35:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:39.070 00:35:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.070 00:35:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:39.070 00:35:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:39.070 00:35:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:39.070 00:35:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:39.070 00:35:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:39.070 00:35:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:39.070 00:35:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.070 00:35:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.070 00:35:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:39.070 00:35:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:39.070 00:35:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:39.070 00:35:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:39.070 00:35:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:39.070 00:35:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.070 00:35:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:39.070 00:35:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:39.070 00:35:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:39.070 00:35:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:39.070 00:35:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:39.070 00:35:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:39.070 Cannot find device "nvmf_tgt_br" 00:26:39.070 00:35:26 -- nvmf/common.sh@154 -- # true 00:26:39.070 00:35:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:39.070 Cannot find device "nvmf_tgt_br2" 00:26:39.070 00:35:26 -- nvmf/common.sh@155 -- # true 
00:26:39.070 00:35:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:39.070 00:35:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:39.070 Cannot find device "nvmf_tgt_br" 00:26:39.070 00:35:26 -- nvmf/common.sh@157 -- # true 00:26:39.070 00:35:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:39.070 Cannot find device "nvmf_tgt_br2" 00:26:39.070 00:35:26 -- nvmf/common.sh@158 -- # true 00:26:39.070 00:35:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:39.070 00:35:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:39.329 00:35:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:39.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:39.329 00:35:26 -- nvmf/common.sh@161 -- # true 00:26:39.329 00:35:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:39.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:39.329 00:35:26 -- nvmf/common.sh@162 -- # true 00:26:39.329 00:35:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:39.329 00:35:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:39.329 00:35:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:39.329 00:35:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:39.329 00:35:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:39.329 00:35:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:39.329 00:35:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:39.329 00:35:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:39.329 00:35:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:39.329 00:35:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:39.329 00:35:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:39.330 00:35:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:39.330 00:35:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:39.330 00:35:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:39.330 00:35:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:39.330 00:35:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:39.330 00:35:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:39.330 00:35:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:39.330 00:35:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:39.330 00:35:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:39.330 00:35:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:39.330 00:35:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:39.330 00:35:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:39.330 00:35:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:39.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:39.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:26:39.330 00:26:39.330 --- 10.0.0.2 ping statistics --- 00:26:39.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.330 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:26:39.330 00:35:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:39.330 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:39.330 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:26:39.330 00:26:39.330 --- 10.0.0.3 ping statistics --- 00:26:39.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.330 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:26:39.330 00:35:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:39.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:39.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:26:39.330 00:26:39.330 --- 10.0.0.1 ping statistics --- 00:26:39.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.330 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:26:39.330 00:35:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.330 00:35:26 -- nvmf/common.sh@421 -- # return 0 00:26:39.330 00:35:26 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:39.330 00:35:26 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:39.990 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:40.253 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:40.253 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:40.253 00:35:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.253 00:35:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:40.253 00:35:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:40.253 00:35:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.253 00:35:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:40.253 00:35:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:40.253 00:35:27 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:40.253 00:35:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:40.253 00:35:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:40.253 00:35:27 -- common/autotest_common.sh@10 -- # set +x 00:26:40.253 00:35:27 -- nvmf/common.sh@469 -- # nvmfpid=102839 00:26:40.253 00:35:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:40.253 00:35:27 -- nvmf/common.sh@470 -- # waitforlisten 102839 00:26:40.253 00:35:27 -- common/autotest_common.sh@819 -- # '[' -z 102839 ']' 00:26:40.253 00:35:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.253 00:35:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:40.253 00:35:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.253 00:35:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:40.253 00:35:27 -- common/autotest_common.sh@10 -- # set +x 00:26:40.513 [2024-07-13 00:35:27.525929] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
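[editor's note] The nvmf_tgt instance starting above runs inside the nvmf_tgt_ns_spdk namespace that nvmf_veth_init builds in the trace preceding it. Condensed, and restating only commands visible in the log (the individual `ip link set ... up` steps are collapsed into a comment), the topology is:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring all links (and lo inside the namespace) up, then bridge the
  # host-side veth peers together
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
The pings above confirm 10.0.0.2 and 10.0.0.3 from the host side and 10.0.0.1 from inside the namespace before the target application is started.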
00:26:40.513 [2024-07-13 00:35:27.526351] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.513 [2024-07-13 00:35:27.671901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:40.772 [2024-07-13 00:35:27.794729] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:40.772 [2024-07-13 00:35:27.795250] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.772 [2024-07-13 00:35:27.795452] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.772 [2024-07-13 00:35:27.795632] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:40.772 [2024-07-13 00:35:27.795920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.772 [2024-07-13 00:35:27.796076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:40.772 [2024-07-13 00:35:27.796182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.772 [2024-07-13 00:35:27.796184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:41.340 00:35:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:41.340 00:35:28 -- common/autotest_common.sh@852 -- # return 0 00:26:41.340 00:35:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:41.340 00:35:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:41.340 00:35:28 -- common/autotest_common.sh@10 -- # set +x 00:26:41.340 00:35:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.340 00:35:28 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:41.340 00:35:28 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:41.340 00:35:28 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:41.598 00:35:28 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:41.598 00:35:28 -- scripts/common.sh@312 -- # local nvmes 00:26:41.598 00:35:28 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:41.598 00:35:28 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:41.598 00:35:28 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:41.598 00:35:28 -- scripts/common.sh@297 -- # local bdf= 00:26:41.598 00:35:28 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:41.598 00:35:28 -- scripts/common.sh@232 -- # local class 00:26:41.598 00:35:28 -- scripts/common.sh@233 -- # local subclass 00:26:41.598 00:35:28 -- scripts/common.sh@234 -- # local progif 00:26:41.598 00:35:28 -- scripts/common.sh@235 -- # printf %02x 1 00:26:41.598 00:35:28 -- scripts/common.sh@235 -- # class=01 00:26:41.598 00:35:28 -- scripts/common.sh@236 -- # printf %02x 8 00:26:41.598 00:35:28 -- scripts/common.sh@236 -- # subclass=08 00:26:41.598 00:35:28 -- scripts/common.sh@237 -- # printf %02x 2 00:26:41.598 00:35:28 -- scripts/common.sh@237 -- # progif=02 00:26:41.598 00:35:28 -- scripts/common.sh@239 -- # hash lspci 00:26:41.598 00:35:28 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:41.598 00:35:28 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:41.598 00:35:28 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:41.598 00:35:28 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:41.598 00:35:28 -- scripts/common.sh@244 -- # tr -d '"' 00:26:41.598 00:35:28 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:41.598 00:35:28 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:41.598 00:35:28 -- scripts/common.sh@15 -- # local i 00:26:41.598 00:35:28 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:41.598 00:35:28 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:41.598 00:35:28 -- scripts/common.sh@24 -- # return 0 00:26:41.598 00:35:28 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:41.598 00:35:28 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:41.598 00:35:28 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:41.598 00:35:28 -- scripts/common.sh@15 -- # local i 00:26:41.598 00:35:28 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:41.598 00:35:28 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:41.598 00:35:28 -- scripts/common.sh@24 -- # return 0 00:26:41.598 00:35:28 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:41.598 00:35:28 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:41.598 00:35:28 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:41.598 00:35:28 -- scripts/common.sh@322 -- # uname -s 00:26:41.598 00:35:28 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:41.598 00:35:28 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:41.598 00:35:28 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:41.598 00:35:28 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:41.598 00:35:28 -- scripts/common.sh@322 -- # uname -s 00:26:41.598 00:35:28 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:41.598 00:35:28 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:41.598 00:35:28 -- scripts/common.sh@327 -- # (( 2 )) 00:26:41.598 00:35:28 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:41.598 00:35:28 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:41.598 00:35:28 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:41.598 00:35:28 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:41.598 00:35:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:41.598 00:35:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:41.598 00:35:28 -- common/autotest_common.sh@10 -- # set +x 00:26:41.598 ************************************ 00:26:41.598 START TEST spdk_target_abort 00:26:41.598 ************************************ 00:26:41.598 00:35:28 -- common/autotest_common.sh@1104 -- # spdk_target 00:26:41.598 00:35:28 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:41.598 00:35:28 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:41.598 00:35:28 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:41.598 00:35:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.598 00:35:28 -- common/autotest_common.sh@10 -- # set +x 00:26:41.598 spdk_targetn1 00:26:41.598 00:35:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.598 00:35:28 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:41.598 00:35:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.598 00:35:28 -- common/autotest_common.sh@10 -- # set +x 00:26:41.598 [2024-07-13 
00:35:28.704174] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.598 00:35:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.598 00:35:28 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:41.598 00:35:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.598 00:35:28 -- common/autotest_common.sh@10 -- # set +x 00:26:41.599 00:35:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:41.599 00:35:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.599 00:35:28 -- common/autotest_common.sh@10 -- # set +x 00:26:41.599 00:35:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:41.599 00:35:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.599 00:35:28 -- common/autotest_common.sh@10 -- # set +x 00:26:41.599 [2024-07-13 00:35:28.732364] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.599 00:35:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:41.599 00:35:28 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:44.882 Initializing NVMe Controllers 00:26:44.882 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:44.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:44.882 Initialization complete. Launching workers. 00:26:44.882 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 11703, failed: 0 00:26:44.882 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1176, failed to submit 10527 00:26:44.882 success 768, unsuccess 408, failed 0 00:26:44.882 00:35:31 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:44.882 00:35:31 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:48.184 Initializing NVMe Controllers 00:26:48.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:48.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:48.184 Initialization complete. Launching workers. 00:26:48.184 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5920, failed: 0 00:26:48.184 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1246, failed to submit 4674 00:26:48.184 success 277, unsuccess 969, failed 0 00:26:48.184 00:35:35 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:48.184 00:35:35 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:51.472 Initializing NVMe Controllers 00:26:51.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:51.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:51.472 Initialization complete. Launching workers. 
00:26:51.472 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31710, failed: 0 00:26:51.472 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2585, failed to submit 29125 00:26:51.472 success 502, unsuccess 2083, failed 0 00:26:51.472 00:35:38 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:51.472 00:35:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.472 00:35:38 -- common/autotest_common.sh@10 -- # set +x 00:26:51.472 00:35:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:51.472 00:35:38 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:51.472 00:35:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:51.472 00:35:38 -- common/autotest_common.sh@10 -- # set +x 00:26:52.038 00:35:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:52.038 00:35:39 -- target/abort_qd_sizes.sh@62 -- # killprocess 102839 00:26:52.039 00:35:39 -- common/autotest_common.sh@926 -- # '[' -z 102839 ']' 00:26:52.039 00:35:39 -- common/autotest_common.sh@930 -- # kill -0 102839 00:26:52.039 00:35:39 -- common/autotest_common.sh@931 -- # uname 00:26:52.039 00:35:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:52.039 00:35:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 102839 00:26:52.039 killing process with pid 102839 00:26:52.039 00:35:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:52.039 00:35:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:52.039 00:35:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 102839' 00:26:52.039 00:35:39 -- common/autotest_common.sh@945 -- # kill 102839 00:26:52.039 00:35:39 -- common/autotest_common.sh@950 -- # wait 102839 00:26:52.297 00:26:52.297 real 0m10.815s 00:26:52.297 user 0m44.182s 00:26:52.297 sys 0m1.656s 00:26:52.297 00:35:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:52.297 00:35:39 -- common/autotest_common.sh@10 -- # set +x 00:26:52.297 ************************************ 00:26:52.297 END TEST spdk_target_abort 00:26:52.297 ************************************ 00:26:52.297 00:35:39 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:26:52.297 00:35:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:52.297 00:35:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:52.297 00:35:39 -- common/autotest_common.sh@10 -- # set +x 00:26:52.297 ************************************ 00:26:52.297 START TEST kernel_target_abort 00:26:52.297 ************************************ 00:26:52.297 00:35:39 -- common/autotest_common.sh@1104 -- # kernel_target 00:26:52.297 00:35:39 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:26:52.297 00:35:39 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:26:52.297 00:35:39 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:26:52.298 00:35:39 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:26:52.298 00:35:39 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:26:52.298 00:35:39 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:52.298 00:35:39 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:52.298 00:35:39 -- nvmf/common.sh@627 -- # local block nvme 00:26:52.298 00:35:39 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:26:52.298 00:35:39 -- nvmf/common.sh@630 -- # modprobe nvmet 00:26:52.298 00:35:39 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:52.298 00:35:39 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:52.865 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:52.865 Waiting for block devices as requested 00:26:52.865 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:52.865 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:53.123 00:35:40 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:53.123 00:35:40 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:53.123 00:35:40 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:26:53.123 00:35:40 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:26:53.123 00:35:40 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:53.123 No valid GPT data, bailing 00:26:53.123 00:35:40 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:53.123 00:35:40 -- scripts/common.sh@393 -- # pt= 00:26:53.123 00:35:40 -- scripts/common.sh@394 -- # return 1 00:26:53.123 00:35:40 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:26:53.123 00:35:40 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:53.123 00:35:40 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:53.123 00:35:40 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:26:53.123 00:35:40 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:26:53.123 00:35:40 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:53.123 No valid GPT data, bailing 00:26:53.124 00:35:40 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:53.124 00:35:40 -- scripts/common.sh@393 -- # pt= 00:26:53.124 00:35:40 -- scripts/common.sh@394 -- # return 1 00:26:53.124 00:35:40 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:26:53.124 00:35:40 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:53.124 00:35:40 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:53.124 00:35:40 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:26:53.124 00:35:40 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:26:53.124 00:35:40 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:26:53.124 No valid GPT data, bailing 00:26:53.124 00:35:40 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:26:53.124 00:35:40 -- scripts/common.sh@393 -- # pt= 00:26:53.124 00:35:40 -- scripts/common.sh@394 -- # return 1 00:26:53.124 00:35:40 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:26:53.124 00:35:40 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:53.124 00:35:40 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:26:53.124 00:35:40 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:26:53.124 00:35:40 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:26:53.124 00:35:40 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:26:53.383 No valid GPT data, bailing 00:26:53.383 00:35:40 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:26:53.383 00:35:40 -- scripts/common.sh@393 -- # pt= 00:26:53.383 00:35:40 -- scripts/common.sh@394 -- # return 1 00:26:53.383 00:35:40 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:26:53.383 00:35:40 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:26:53.383 00:35:40 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:53.383 00:35:40 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:53.383 00:35:40 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:53.383 00:35:40 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:26:53.383 00:35:40 -- nvmf/common.sh@654 -- # echo 1 00:26:53.383 00:35:40 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:26:53.383 00:35:40 -- nvmf/common.sh@656 -- # echo 1 00:26:53.383 00:35:40 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:26:53.383 00:35:40 -- nvmf/common.sh@663 -- # echo tcp 00:26:53.383 00:35:40 -- nvmf/common.sh@664 -- # echo 4420 00:26:53.383 00:35:40 -- nvmf/common.sh@665 -- # echo ipv4 00:26:53.383 00:35:40 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:53.383 00:35:40 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b51f2fb3-a914-4041-8557-0311547dd192 --hostid=b51f2fb3-a914-4041-8557-0311547dd192 -a 10.0.0.1 -t tcp -s 4420 00:26:53.383 00:26:53.383 Discovery Log Number of Records 2, Generation counter 2 00:26:53.383 =====Discovery Log Entry 0====== 00:26:53.383 trtype: tcp 00:26:53.383 adrfam: ipv4 00:26:53.383 subtype: current discovery subsystem 00:26:53.383 treq: not specified, sq flow control disable supported 00:26:53.383 portid: 1 00:26:53.383 trsvcid: 4420 00:26:53.383 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:53.383 traddr: 10.0.0.1 00:26:53.383 eflags: none 00:26:53.383 sectype: none 00:26:53.383 =====Discovery Log Entry 1====== 00:26:53.383 trtype: tcp 00:26:53.383 adrfam: ipv4 00:26:53.383 subtype: nvme subsystem 00:26:53.383 treq: not specified, sq flow control disable supported 00:26:53.383 portid: 1 00:26:53.383 trsvcid: 4420 00:26:53.383 subnqn: kernel_target 00:26:53.383 traddr: 10.0.0.1 00:26:53.383 eflags: none 00:26:53.383 sectype: none 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
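[editor's note] The kernel_target_abort configfs sequence above is hard to follow in xtrace form; before the abort runs below, here is a condensed sketch. The echoed values and paths are taken from the trace, but the configfs attribute file names are the standard kernel nvmet ones and are inferred, since the log records only the values being written, not their destinations.
  # sketch of configure_kernel_target as traced above; attribute names assumed
  modprobe nvmet
  sub=/sys/kernel/config/nvmet/subsystems/kernel_target
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$sub"
  mkdir "$sub/namespaces/1"
  mkdir "$port"
  echo SPDK-kernel_target > "$sub/attr_serial"          # serial string (name assumed)
  echo 1                  > "$sub/attr_allow_any_host"  # name assumed
  echo /dev/nvme1n3       > "$sub/namespaces/1/device_path"
  echo 1                  > "$sub/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp      > "$port/addr_trtype"
  echo 4420     > "$port/addr_trsvcid"
  echo ipv4     > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"
The nvme discover output shown above (Discovery Log Entry 1, subnqn kernel_target on 10.0.0.1:4420) confirms the kernel target is reachable before the abort workloads start.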
00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:53.383 00:35:40 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:56.660 Initializing NVMe Controllers 00:26:56.660 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:56.660 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:56.660 Initialization complete. Launching workers. 00:26:56.660 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 32604, failed: 0 00:26:56.660 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 32604, failed to submit 0 00:26:56.660 success 0, unsuccess 32604, failed 0 00:26:56.660 00:35:43 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:56.660 00:35:43 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:59.956 Initializing NVMe Controllers 00:26:59.956 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:59.956 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:59.956 Initialization complete. Launching workers. 00:26:59.956 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 68305, failed: 0 00:26:59.956 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 28056, failed to submit 40249 00:26:59.956 success 0, unsuccess 28056, failed 0 00:26:59.956 00:35:46 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:59.956 00:35:46 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:03.243 Initializing NVMe Controllers 00:27:03.243 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:03.243 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:03.243 Initialization complete. Launching workers. 
00:27:03.243 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 76834, failed: 0 00:27:03.243 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 19166, failed to submit 57668 00:27:03.243 success 0, unsuccess 19166, failed 0 00:27:03.243 00:35:49 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:27:03.244 00:35:49 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:27:03.244 00:35:49 -- nvmf/common.sh@677 -- # echo 0 00:27:03.244 00:35:49 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:27:03.244 00:35:49 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:03.244 00:35:49 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:03.244 00:35:49 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:03.244 00:35:49 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:27:03.244 00:35:49 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:27:03.244 ************************************ 00:27:03.244 END TEST kernel_target_abort 00:27:03.244 ************************************ 00:27:03.244 00:27:03.244 real 0m10.527s 00:27:03.244 user 0m5.127s 00:27:03.244 sys 0m2.655s 00:27:03.244 00:35:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:03.244 00:35:50 -- common/autotest_common.sh@10 -- # set +x 00:27:03.244 00:35:50 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:27:03.244 00:35:50 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:27:03.244 00:35:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:03.244 00:35:50 -- nvmf/common.sh@116 -- # sync 00:27:03.244 00:35:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:03.244 00:35:50 -- nvmf/common.sh@119 -- # set +e 00:27:03.244 00:35:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:03.244 00:35:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:03.244 rmmod nvme_tcp 00:27:03.244 rmmod nvme_fabrics 00:27:03.244 rmmod nvme_keyring 00:27:03.244 00:35:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:03.244 Process with pid 102839 is not found 00:27:03.244 00:35:50 -- nvmf/common.sh@123 -- # set -e 00:27:03.244 00:35:50 -- nvmf/common.sh@124 -- # return 0 00:27:03.244 00:35:50 -- nvmf/common.sh@477 -- # '[' -n 102839 ']' 00:27:03.244 00:35:50 -- nvmf/common.sh@478 -- # killprocess 102839 00:27:03.244 00:35:50 -- common/autotest_common.sh@926 -- # '[' -z 102839 ']' 00:27:03.244 00:35:50 -- common/autotest_common.sh@930 -- # kill -0 102839 00:27:03.244 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (102839) - No such process 00:27:03.244 00:35:50 -- common/autotest_common.sh@953 -- # echo 'Process with pid 102839 is not found' 00:27:03.244 00:35:50 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:27:03.244 00:35:50 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:03.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:03.812 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:03.812 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:03.812 00:35:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:03.812 00:35:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:03.812 00:35:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.812 00:35:50 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:27:03.812 00:35:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.812 00:35:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:03.812 00:35:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.812 00:35:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:03.812 00:27:03.812 real 0m24.911s 00:27:03.812 user 0m50.650s 00:27:03.812 sys 0m5.732s 00:27:03.812 ************************************ 00:27:03.812 END TEST nvmf_abort_qd_sizes 00:27:03.812 ************************************ 00:27:03.812 00:35:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:03.812 00:35:50 -- common/autotest_common.sh@10 -- # set +x 00:27:03.812 00:35:51 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:03.812 00:35:51 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:27:03.812 00:35:51 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:27:03.812 00:35:51 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:27:03.812 00:35:51 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:03.812 00:35:51 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:27:03.812 00:35:51 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:03.812 00:35:51 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:03.812 00:35:51 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:27:03.812 00:35:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:27:03.812 00:35:51 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:27:03.812 00:35:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:27:03.812 00:35:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:27:03.812 00:35:51 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:27:03.812 00:35:51 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:27:03.812 00:35:51 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:27:03.812 00:35:51 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:27:03.812 00:35:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:03.812 00:35:51 -- common/autotest_common.sh@10 -- # set +x 00:27:03.812 00:35:51 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:27:03.812 00:35:51 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:27:03.812 00:35:51 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:27:03.812 00:35:51 -- common/autotest_common.sh@10 -- # set +x 00:27:05.717 INFO: APP EXITING 00:27:05.717 INFO: killing all VMs 00:27:05.717 INFO: killing vhost app 00:27:05.717 INFO: EXIT DONE 00:27:06.283 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:06.283 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:06.283 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:07.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:07.218 Cleaning 00:27:07.218 Removing: /var/run/dpdk/spdk0/config 00:27:07.218 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:07.218 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:07.218 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:07.218 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:07.218 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:07.218 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:07.218 Removing: /var/run/dpdk/spdk1/config 00:27:07.218 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:07.218 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:07.218 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:27:07.218 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:07.218 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:07.218 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:07.218 Removing: /var/run/dpdk/spdk2/config 00:27:07.218 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:07.218 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:07.218 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:07.218 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:07.218 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:07.218 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:07.218 Removing: /var/run/dpdk/spdk3/config 00:27:07.218 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:07.218 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:07.218 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:07.218 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:07.218 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:07.218 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:07.218 Removing: /var/run/dpdk/spdk4/config 00:27:07.218 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:07.218 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:07.218 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:07.218 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:07.218 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:07.218 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:07.218 Removing: /dev/shm/nvmf_trace.0 00:27:07.219 Removing: /dev/shm/spdk_tgt_trace.pid67543 00:27:07.219 Removing: /var/run/dpdk/spdk0 00:27:07.219 Removing: /var/run/dpdk/spdk1 00:27:07.219 Removing: /var/run/dpdk/spdk2 00:27:07.219 Removing: /var/run/dpdk/spdk3 00:27:07.219 Removing: /var/run/dpdk/spdk4 00:27:07.219 Removing: /var/run/dpdk/spdk_pid100051 00:27:07.219 Removing: /var/run/dpdk/spdk_pid100337 00:27:07.219 Removing: /var/run/dpdk/spdk_pid100634 00:27:07.219 Removing: /var/run/dpdk/spdk_pid101191 00:27:07.219 Removing: /var/run/dpdk/spdk_pid101196 00:27:07.219 Removing: /var/run/dpdk/spdk_pid101557 00:27:07.219 Removing: /var/run/dpdk/spdk_pid101716 00:27:07.219 Removing: /var/run/dpdk/spdk_pid101873 00:27:07.219 Removing: /var/run/dpdk/spdk_pid101971 00:27:07.219 Removing: /var/run/dpdk/spdk_pid102126 00:27:07.219 Removing: /var/run/dpdk/spdk_pid102235 00:27:07.219 Removing: /var/run/dpdk/spdk_pid102908 00:27:07.219 Removing: /var/run/dpdk/spdk_pid102938 00:27:07.219 Removing: /var/run/dpdk/spdk_pid102979 00:27:07.219 Removing: /var/run/dpdk/spdk_pid103228 00:27:07.219 Removing: /var/run/dpdk/spdk_pid103263 00:27:07.219 Removing: /var/run/dpdk/spdk_pid103298 00:27:07.219 Removing: /var/run/dpdk/spdk_pid67393 00:27:07.219 Removing: /var/run/dpdk/spdk_pid67543 00:27:07.219 Removing: /var/run/dpdk/spdk_pid67843 00:27:07.219 Removing: /var/run/dpdk/spdk_pid68112 00:27:07.219 Removing: /var/run/dpdk/spdk_pid68287 00:27:07.219 Removing: /var/run/dpdk/spdk_pid68368 00:27:07.219 Removing: /var/run/dpdk/spdk_pid68453 00:27:07.219 Removing: /var/run/dpdk/spdk_pid68542 00:27:07.219 Removing: /var/run/dpdk/spdk_pid68581 00:27:07.219 Removing: /var/run/dpdk/spdk_pid68616 00:27:07.219 Removing: /var/run/dpdk/spdk_pid68671 00:27:07.219 Removing: /var/run/dpdk/spdk_pid68760 00:27:07.219 Removing: /var/run/dpdk/spdk_pid69378 00:27:07.219 Removing: /var/run/dpdk/spdk_pid69442 00:27:07.219 Removing: /var/run/dpdk/spdk_pid69511 00:27:07.219 Removing: /var/run/dpdk/spdk_pid69539 00:27:07.219 Removing: 
/var/run/dpdk/spdk_pid69618 00:27:07.219 Removing: /var/run/dpdk/spdk_pid69646 00:27:07.219 Removing: /var/run/dpdk/spdk_pid69725 00:27:07.219 Removing: /var/run/dpdk/spdk_pid69753 00:27:07.219 Removing: /var/run/dpdk/spdk_pid69812 00:27:07.219 Removing: /var/run/dpdk/spdk_pid69842 00:27:07.219 Removing: /var/run/dpdk/spdk_pid69888 00:27:07.219 Removing: /var/run/dpdk/spdk_pid69918 00:27:07.219 Removing: /var/run/dpdk/spdk_pid70064 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70100 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70168 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70243 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70262 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70326 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70342 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70377 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70396 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70431 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70450 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70485 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70503 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70539 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70553 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70593 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70607 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70642 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70662 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70695 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70716 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70745 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70770 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70799 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70823 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70853 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70867 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70907 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70921 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70961 00:27:07.478 Removing: /var/run/dpdk/spdk_pid70975 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71010 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71029 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71064 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71083 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71118 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71132 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71172 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71189 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71232 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71249 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71292 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71306 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71346 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71360 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71396 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71459 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71569 00:27:07.478 Removing: /var/run/dpdk/spdk_pid71979 00:27:07.478 Removing: /var/run/dpdk/spdk_pid78719 00:27:07.478 Removing: /var/run/dpdk/spdk_pid79066 00:27:07.478 Removing: /var/run/dpdk/spdk_pid81477 00:27:07.478 Removing: /var/run/dpdk/spdk_pid81851 00:27:07.478 Removing: /var/run/dpdk/spdk_pid82104 00:27:07.478 Removing: /var/run/dpdk/spdk_pid82155 00:27:07.478 Removing: /var/run/dpdk/spdk_pid82463 00:27:07.478 Removing: /var/run/dpdk/spdk_pid82512 00:27:07.478 Removing: /var/run/dpdk/spdk_pid82886 00:27:07.478 Removing: /var/run/dpdk/spdk_pid83404 00:27:07.478 Removing: /var/run/dpdk/spdk_pid83839 00:27:07.478 Removing: /var/run/dpdk/spdk_pid84793 00:27:07.478 Removing: /var/run/dpdk/spdk_pid85764 
00:27:07.478 Removing: /var/run/dpdk/spdk_pid85879 00:27:07.478 Removing: /var/run/dpdk/spdk_pid85945 00:27:07.478 Removing: /var/run/dpdk/spdk_pid87409 00:27:07.478 Removing: /var/run/dpdk/spdk_pid87649 00:27:07.478 Removing: /var/run/dpdk/spdk_pid88091 00:27:07.478 Removing: /var/run/dpdk/spdk_pid88200 00:27:07.478 Removing: /var/run/dpdk/spdk_pid88355 00:27:07.478 Removing: /var/run/dpdk/spdk_pid88395 00:27:07.478 Removing: /var/run/dpdk/spdk_pid88436 00:27:07.478 Removing: /var/run/dpdk/spdk_pid88486 00:27:07.478 Removing: /var/run/dpdk/spdk_pid88644 00:27:07.478 Removing: /var/run/dpdk/spdk_pid88802 00:27:07.478 Removing: /var/run/dpdk/spdk_pid89066 00:27:07.479 Removing: /var/run/dpdk/spdk_pid89183 00:27:07.479 Removing: /var/run/dpdk/spdk_pid89604 00:27:07.479 Removing: /var/run/dpdk/spdk_pid89979 00:27:07.479 Removing: /var/run/dpdk/spdk_pid89988 00:27:07.479 Removing: /var/run/dpdk/spdk_pid92217 00:27:07.479 Removing: /var/run/dpdk/spdk_pid92525 00:27:07.479 Removing: /var/run/dpdk/spdk_pid93023 00:27:07.479 Removing: /var/run/dpdk/spdk_pid93025 00:27:07.479 Removing: /var/run/dpdk/spdk_pid93363 00:27:07.479 Removing: /var/run/dpdk/spdk_pid93378 00:27:07.737 Removing: /var/run/dpdk/spdk_pid93392 00:27:07.737 Removing: /var/run/dpdk/spdk_pid93423 00:27:07.737 Removing: /var/run/dpdk/spdk_pid93428 00:27:07.737 Removing: /var/run/dpdk/spdk_pid93572 00:27:07.737 Removing: /var/run/dpdk/spdk_pid93580 00:27:07.737 Removing: /var/run/dpdk/spdk_pid93688 00:27:07.737 Removing: /var/run/dpdk/spdk_pid93690 00:27:07.737 Removing: /var/run/dpdk/spdk_pid93793 00:27:07.737 Removing: /var/run/dpdk/spdk_pid93800 00:27:07.737 Removing: /var/run/dpdk/spdk_pid94274 00:27:07.737 Removing: /var/run/dpdk/spdk_pid94317 00:27:07.737 Removing: /var/run/dpdk/spdk_pid94474 00:27:07.737 Removing: /var/run/dpdk/spdk_pid94594 00:27:07.737 Removing: /var/run/dpdk/spdk_pid94993 00:27:07.737 Removing: /var/run/dpdk/spdk_pid95244 00:27:07.737 Removing: /var/run/dpdk/spdk_pid95745 00:27:07.737 Removing: /var/run/dpdk/spdk_pid96304 00:27:07.737 Removing: /var/run/dpdk/spdk_pid96766 00:27:07.737 Removing: /var/run/dpdk/spdk_pid96852 00:27:07.737 Removing: /var/run/dpdk/spdk_pid96942 00:27:07.737 Removing: /var/run/dpdk/spdk_pid97034 00:27:07.737 Removing: /var/run/dpdk/spdk_pid97191 00:27:07.737 Removing: /var/run/dpdk/spdk_pid97287 00:27:07.737 Removing: /var/run/dpdk/spdk_pid97373 00:27:07.737 Removing: /var/run/dpdk/spdk_pid97463 00:27:07.737 Removing: /var/run/dpdk/spdk_pid97804 00:27:07.737 Removing: /var/run/dpdk/spdk_pid98499 00:27:07.737 Removing: /var/run/dpdk/spdk_pid99851 00:27:07.737 Clean 00:27:07.737 killing process with pid 61731 00:27:07.737 killing process with pid 61732 00:27:07.737 00:35:54 -- common/autotest_common.sh@1436 -- # return 0 00:27:07.737 00:35:54 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:27:07.737 00:35:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:07.737 00:35:54 -- common/autotest_common.sh@10 -- # set +x 00:27:07.737 00:35:54 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:27:07.737 00:35:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:07.737 00:35:54 -- common/autotest_common.sh@10 -- # set +x 00:27:07.996 00:35:55 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:07.996 00:35:55 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:07.996 00:35:55 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:07.996 00:35:55 
-- spdk/autotest.sh@394 -- # hash lcov 00:27:07.996 00:35:55 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:07.996 00:35:55 -- spdk/autotest.sh@396 -- # hostname 00:27:07.996 00:35:55 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:08.254 geninfo: WARNING: invalid characters removed from testname! 00:27:30.183 00:36:15 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:32.083 00:36:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:34.611 00:36:21 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:36.515 00:36:23 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:39.044 00:36:25 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:40.945 00:36:28 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:43.476 00:36:30 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:43.476 00:36:30 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:43.476 00:36:30 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:43.476 00:36:30 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.476 00:36:30 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.476 00:36:30 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.476 00:36:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.476 00:36:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.476 00:36:30 -- paths/export.sh@5 -- $ export PATH 00:27:43.476 00:36:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.476 00:36:30 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:43.476 00:36:30 -- common/autobuild_common.sh@435 -- $ date +%s 00:27:43.476 00:36:30 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720830990.XXXXXX 00:27:43.476 00:36:30 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720830990.Pb2Nxo 00:27:43.476 00:36:30 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:27:43.476 00:36:30 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:27:43.476 00:36:30 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:43.476 00:36:30 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:27:43.476 00:36:30 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:43.476 00:36:30 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:43.476 00:36:30 -- common/autobuild_common.sh@451 -- $ get_config_params 00:27:43.477 00:36:30 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:27:43.477 00:36:30 -- common/autotest_common.sh@10 -- $ set +x 00:27:43.477 00:36:30 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:27:43.477 00:36:30 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:43.477 00:36:30 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:43.477 
00:36:30 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:43.477 00:36:30 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:43.477 00:36:30 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:43.477 00:36:30 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:43.477 00:36:30 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:43.477 00:36:30 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:43.477 00:36:30 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:43.477 00:36:30 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:43.477 + [[ -n 5864 ]] 00:27:43.477 + sudo kill 5864 00:27:43.488 [Pipeline] } 00:27:43.512 [Pipeline] // timeout 00:27:43.518 [Pipeline] } 00:27:43.536 [Pipeline] // stage 00:27:43.542 [Pipeline] } 00:27:43.567 [Pipeline] // catchError 00:27:43.577 [Pipeline] stage 00:27:43.580 [Pipeline] { (Stop VM) 00:27:43.595 [Pipeline] sh 00:27:43.928 + vagrant halt 00:27:47.215 ==> default: Halting domain... 00:27:53.791 [Pipeline] sh 00:27:54.070 + vagrant destroy -f 00:27:57.353 ==> default: Removing domain... 00:27:57.365 [Pipeline] sh 00:27:57.643 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:27:57.652 [Pipeline] } 00:27:57.670 [Pipeline] // stage 00:27:57.675 [Pipeline] } 00:27:57.691 [Pipeline] // dir 00:27:57.696 [Pipeline] } 00:27:57.712 [Pipeline] // wrap 00:27:57.718 [Pipeline] } 00:27:57.734 [Pipeline] // catchError 00:27:57.742 [Pipeline] stage 00:27:57.744 [Pipeline] { (Epilogue) 00:27:57.757 [Pipeline] sh 00:27:58.037 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:03.344 [Pipeline] catchError 00:28:03.346 [Pipeline] { 00:28:03.363 [Pipeline] sh 00:28:03.644 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:03.902 Artifacts sizes are good 00:28:03.912 [Pipeline] } 00:28:03.930 [Pipeline] // catchError 00:28:03.942 [Pipeline] archiveArtifacts 00:28:03.950 Archiving artifacts 00:28:04.120 [Pipeline] cleanWs 00:28:04.133 [WS-CLEANUP] Deleting project workspace... 00:28:04.133 [WS-CLEANUP] Deferred wipeout is used... 00:28:04.139 [WS-CLEANUP] done 00:28:04.141 [Pipeline] } 00:28:04.161 [Pipeline] // stage 00:28:04.166 [Pipeline] } 00:28:04.183 [Pipeline] // node 00:28:04.188 [Pipeline] End of Pipeline 00:28:04.230 Finished: SUCCESS
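Two of the shell sequences traced earlier are worth setting out in one place. First, the kernel NVMe-oF target teardown (clean_kernel_target): the trace shows the configfs entries being removed child-before-parent and the nvmet modules unloaded afterwards. A sketch assembled from those commands (the bare "echo 0" in the trace shows no redirect target; directing it at the namespace enable attribute is an assumption):

    # Reconstructed from the nvmf/common.sh lines in the trace; not the script source itself.
    clean_kernel_target() {
        local base=/sys/kernel/config/nvmet
        [[ -e "$base/subsystems/kernel_target" ]] || return 0
        echo 0 > "$base/subsystems/kernel_target/namespaces/1/enable"   # assumed redirect target
        rm -f "$base/ports/1/subsystems/kernel_target"                  # unlink port -> subsystem
        rmdir "$base/subsystems/kernel_target/namespaces/1"
        rmdir "$base/ports/1"
        rmdir "$base/subsystems/kernel_target"
        modprobe -r nvmet_tcp nvmet                                     # unload the kernel target
    }

Second, the coverage post-processing: a test-time lcov capture is merged with a baseline capture (cov_base.info, produced earlier in the run) and then filtered so DPDK, system headers and a few SPDK example apps do not count against coverage. Condensed from the lcov invocations in the trace (the long --rc flag list is abbreviated here, and a loop stands in for the five separate removal steps the script runs):

    out=/home/vagrant/spdk_repo/spdk/../output
    lcov_opts='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'
    # Capture counters produced by the test run, tagged with the host name
    lcov $lcov_opts -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$out/cov_test.info"
    # Merge the baseline and test captures into a single tracefile
    lcov $lcov_opts -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # Strip code that is not SPDK's own from the merged tracefile
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $lcov_opts -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done
    rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR   # relative paths, as in the trace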